CN117376652B - Live scene interactive tracing method and device, computer equipment and storage medium - Google Patents

Live scene interactive tracing method and device, computer equipment and storage medium

Info

Publication number
CN117376652B
CN117376652B (application CN202311668072.9A)
Authority
CN
China
Prior art keywords
live
conversation
clause
broadcasting
tree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311668072.9A
Other languages
Chinese (zh)
Other versions
CN117376652A (en)
Inventor
李惠义
李松
孙逸凡
钱玉灏
李亚飞
杨钰敏
马妍
王艳驰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youyou Internet Co ltd
Original Assignee
Shenzhen Youyou Internet Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Youyou Internet Co ltd filed Critical Shenzhen Youyou Internet Co ltd
Priority to CN202311668072.9A priority Critical patent/CN117376652B/en
Publication of CN117376652A publication Critical patent/CN117376652A/en
Application granted granted Critical
Publication of CN117376652B publication Critical patent/CN117376652B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4758End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for providing answers, e.g. voting
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/04Segmentation; Word boundary detection
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254Management at additional data server, e.g. shopping server, rights management server
    • H04N21/2542Management at additional data server, e.g. shopping server, rights management server for selling goods, e.g. TV shopping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26291Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for providing content or additional data updates, e.g. updating software modules, stored at the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/47815Electronic shopping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application provide a method and an apparatus for interactive tracing in a live scene, a computer device, and a storage medium, and belong to the field of computer technology. In the embodiments, each live script clause is stored as one node of a script tree. If interaction information is detected in the live scene, the current broadcast is interrupted and the reply script corresponding to the interaction information is broadcast, realizing interaction in the live scene. When the current broadcast is interrupted, a target traversal path sequence between the interruption node and the root node is obtained from the script tree, and the live script clause of the root node together with the target traversal path sequence is stored in a first linear table indexed by the interruption count. When broadcasting resumes, the script tree can be queried using the live script clause of the root node stored in the first linear table, and the resume node can then be located quickly in the script tree from the target traversal path sequence, thereby realizing interactive tracing in the live scene. The method and apparatus reduce the complexity of the trace-back query and improve the efficiency of live interaction.

Description

Live scene interactive tracing method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for interactive tracing of a live scene, a computer device, and a storage medium.
Background
During a live broadcast, an AI host broadcasts a pre-arranged live script. For example, the opening script is broadcast during the opening phase, and the product script is broadcast during the product-introduction phase. If an interaction demand from a viewer is detected, the AI interrupts the broadcast of the live script and broadcasts the reply script related to the interaction. After the interaction ends, the AI resumes broadcasting from where the live script was interrupted.
In the related art, the live scripts are stored in advance in a preset order and broadcast in sequence. If interaction with a viewer is required, the current broadcast is interrupted and the interrupted position in the sequence is marked; after the reply is completed, broadcasting resumes based on the marked position. The drawback is that a live broadcast requires a large number of scripts of many categories, searching for the resume point by the marked sequence position is inefficient, and the interruption position is variable (for example, the script may be interrupted in the middle of a sentence), so the continuity of the interaction is poor and the subjective experience suffers.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a method and an apparatus for interactive tracing in a live scene, a computer device, and a storage medium, which can improve the efficiency, continuity, and subjective experience of live interaction.
In order to achieve the above object, a first aspect of the embodiments of the present application provides a method for interactive tracing in a live scene, the method comprising:
acquiring live script clauses layer by layer from a preset script tree, and broadcasting the live script clauses in a live scene; wherein the script tree comprises a plurality of layers of nodes, and each layer of nodes corresponds to a live script clause;
if interaction information is detected in the live scene, interrupting the broadcast of the live script clause, broadcasting a reply script corresponding to the interaction information, recording the interruption count, and obtaining an interruption node based on the node corresponding to the live script clause;
performing an in-order traversal of the script tree based on the interruption node to obtain a target traversal path sequence between the root node of the script tree and the interruption node;
storing, with the interruption count as an index, the live script clause corresponding to the root node and the target traversal path sequence in a first linear table;
if the end of broadcasting the reply script is detected, reading in reverse order with the latest interruption count as the index, and reading out the live script clause corresponding to the root node and the target traversal path sequence from the first linear table;
querying the script tree based on the live script clause corresponding to the root node, and traversing the script tree based on the target traversal path sequence to obtain a resume node;
and resuming the broadcast of the live script clauses in the script tree in the live scene, starting from the live script clause corresponding to the resume node.
This embodiment has the following beneficial effects: each live script clause serves as one node of the script tree. If interaction information is detected in the live scene, the broadcast of the live script clause is interrupted and the reply script corresponding to the interaction information is broadcast, realizing interaction in the live scene. When the current broadcast is interrupted, the interruption node can be marked promptly based on the tree structure of the script tree, and the live script clause of the root node together with the target traversal path sequence is stored in the first linear table indexed by the interruption count. If the broadcast is interrupted multiple times, the first linear table stores the root-node script clause and the interruption record for each interruption. When the broadcast resumes, the first linear table is read in reverse order with the latest interruption count as the index. The live script clause of the root node identifies which script tree to query, and the target traversal path sequence indicates the path from the root node to the interruption node within that tree. Therefore, the script tree does not have to be traversed layer by layer and node by node, which greatly improves the efficiency of finding the resume node and hence the efficiency of interactive tracing in the live scene. In addition, since each live script clause is broadcast as one node of the script tree, the probability of interrupting the broadcast in the middle of a sentence is greatly reduced, improving the continuity and subjective experience of live interaction.
Optionally, information of a plurality of script trees of the same second script category is stored in a second linear table, the information including the live script clause of the root node of each script tree and the head pointer of that root node;
the step of querying the script tree based on the live script clause corresponding to the root node comprises:
determining the second script category based on the live script clause corresponding to the root node;
reading the second linear table corresponding to the second script category with the live script clause corresponding to the root node as an index, to obtain the head pointer of the root node;
and querying the script tree based on the head pointer of the root node.
Optionally, the live script clause comprises a plurality of sub-clauses, each sub-clause is stored in order in a linear linked list, and each node stores the linear linked list corresponding to its live script clause;
the step of resuming the broadcast of the live script clauses in the script tree in the live scene, starting from the live script clause corresponding to the resume node, comprises:
obtaining the interrupted sub-clause at which the live script clause was interrupted;
searching the linear linked list corresponding to the resume node based on the interrupted sub-clause, to obtain a resume sub-clause;
and resuming the broadcast of the live script clauses in the script tree in the live scene, starting from the resume sub-clause.
Optionally, if interaction information is detected in the live scene, interrupting the broadcast of the live script clause and broadcasting the reply script corresponding to the interaction information comprises:
when the number of viewers is less than or equal to a first number, detecting keywords of bullet-screen information in the live scene, and broadcasting the reply script according to the keywords of the bullet-screen information;
when the number of viewers is greater than the first number and less than or equal to a second number, detecting bullet-screen information in the live scene, and broadcasting the reply script according to the bullet-screen information; wherein the first number is less than the second number;
when the number of viewers is greater than the second number, detecting bullet-screen information in the live scene that triggers a preset keyword mark, and broadcasting the reply script according to that bullet-screen information;
when the number of viewers is greater than or equal to a third number and less than or equal to a fourth number, detecting, at intervals of a first period, bullet-screen information in the live scene that triggers a preset keyword mark, and broadcasting the reply script according to that bullet-screen information; wherein the third number is greater than the second number and less than the fourth number;
when the number of viewers is greater than the fourth number, taking the question asked most frequently within a preset time as the interaction information;
when the number of viewers entering the live scene within a second period is greater than a preset entry-count threshold, if the product currently being broadcast is not a hot-selling product, broadcasting a welfare (giveaway) script followed by the script corresponding to the hot-selling product;
when the number of viewers entering the live scene within the second period is greater than the preset entry-count threshold, if the product currently being broadcast is a hot-selling product, broadcasting a welfare script and then continuing to broadcast the script corresponding to that hot-selling product.
Optionally, before the step of acquiring the live script clauses layer by layer from the preset script tree, the method further comprises generating the script tree, wherein generating the script tree comprises:
acquiring a live script having a first script category;
splitting the live script into clauses according to a first preset symbol contained in the live script, to obtain a plurality of live script clauses;
and generating each layer of nodes from the live script clauses based on the order of each live script clause among the plurality of live script clauses, to obtain the script tree having the first script category.
Optionally, the acquiring a live script having a first script category comprises:
determining the category of script to be extracted, to obtain the first script category;
determining a script index value according to the first script category;
and acquiring the live script from a script library according to the script index value.
Optionally, the determining a script index value according to the first script category comprises:
determining the pinyin initial of the first character of the first script category;
obtaining the ASCII code corresponding to that pinyin initial, to obtain a target ASCII code;
and obtaining the script index value from the remainder of the modulo operation of the target ASCII code by a preset number of letters.
In order to achieve the above object, a second aspect of the embodiments of the present application provides an apparatus for interactive tracing in a live scene, the apparatus comprising:
a first broadcasting module, configured to acquire live script clauses layer by layer from a preset script tree and broadcast the live script clauses in a live scene; wherein the script tree comprises a plurality of layers of nodes, and each layer of nodes corresponds to a live script clause;
an interruption module, configured to, if interaction information is detected in the live scene, interrupt the broadcast of the live script clause, broadcast the reply script corresponding to the interaction information, record the interruption count, and obtain an interruption node based on the node corresponding to the live script clause;
a first traversal module, configured to perform an in-order traversal of the script tree based on the interruption node, to obtain a target traversal path sequence between the root node of the script tree and the interruption node;
a storage module, configured to store, with the interruption count as an index, the live script clause corresponding to the root node and the target traversal path sequence in a first linear table;
a reading module, configured to, if the end of broadcasting the reply script is detected, read in reverse order with the latest interruption count as the index, and read out the live script clause corresponding to the root node and the target traversal path sequence from the first linear table;
a second traversal module, configured to query the script tree based on the live script clause corresponding to the root node, and traverse the script tree based on the target traversal path sequence to obtain a resume node;
and a second broadcasting module, configured to resume the broadcast of the live script clauses in the script tree in the live scene, starting from the live script clause corresponding to the resume node.
To achieve the above object, a third aspect of the embodiments of the present application provides a computer device comprising a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling connection and communication between the processor and the memory, wherein the program, when executed by the processor, implements the method according to the first aspect.
To achieve the above object, a fourth aspect of the embodiments of the present application provides a storage medium, which is a computer-readable storage medium for computer-readable storage, the storage medium storing one or more programs executable by one or more processors to implement the method according to the first aspect.
Drawings
Fig. 1 is a flowchart of a method for interactive tracing in a live scene provided in an embodiment of the present application;
FIG. 2 is a flow chart of step 101 in FIG. 1;
FIG. 3 is a flow chart of step 201 in FIG. 2;
FIG. 4 is a flow chart of step 302 in FIG. 3;
FIG. 5 is a schematic diagram of a specific implementation of step 302 in FIG. 3;
FIG. 6 is a schematic diagram of the head pointers of the root nodes of multiple script trees of the same second script category stored in a linear linked list;
FIG. 7 is a schematic diagram of a plurality of sub-clauses stored in a linear linked list;
FIG. 8 is a flow chart of step 107 in FIG. 1;
FIG. 9 is a flow chart of step 108 in FIG. 1;
fig. 10 is a block diagram of the module structure of an apparatus for interactive tracing in a live scene provided in an embodiment of the present application;
fig. 11 is a schematic diagram of the hardware structure of a computer device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, the terms referred to in this application are explained:
Artificial Intelligence (AI): a branch of computer science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. It attempts to understand the nature of intelligence and to produce intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking, and uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Tree: a data structure consisting of a hierarchical set of n (n ≥ 0) finite nodes. It is called a "tree" because it looks like an inverted tree, with the root up and the leaves down. It has the following characteristics: each node has zero or more child nodes; the node without a parent node is called the root node; every non-root node has exactly one parent node; and, apart from the root node, the child nodes can be divided into a plurality of disjoint subtrees.
Linear table: one of the most basic, simplest, and most commonly used data structures. A linear table (linear list) is a finite sequence of n data elements with the same characteristics. The relationship between the data elements of a linear table is one-to-one, that is, apart from the first and last elements, the elements are connected end to end.
Linear linked list: a linear table with a linked storage structure, which stores the data elements of the linear table in a set of storage locations with arbitrary addresses; logically adjacent elements need not be physically adjacent, and random access is not possible. It is generally described in terms of nodes: node (representing a data element) = data field (the image of the data element) + pointer field (indicating the storage location of the successor element).
The embodiments of the present application provide a method for interactive tracing in a live scene, which relates to the technical field of artificial intelligence. The method may be applied to a terminal, to a server side, or to software running on a terminal or server. In some embodiments, the terminal may be a smartphone, tablet, notebook computer, desktop computer, etc.; the server side may be configured as an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms; the software may be an application implementing the method for interactive tracing in a live scene, but is not limited to the above forms.
At present, during AI live broadcasting, the AI needs to answer valid bullet-screen information from users in real time, or broadcast related giveaway scripts when the number of viewers increases sharply. If the AI is in the product-introduction phase or another period of broadcasting product information, deciding where to interrupt the AI broadcast to answer the users' bullet-screen information and where to resume the AI broadcast requires a reasonable broadcast order and broadcast rules. As is well known, in live-streaming sales the live scripts differ at different stages; the questions are therefore how the AI can effectively determine the stage to which the interrupted broadcast content belongs, and how to query and resume the script content in the data storage area storing the live scripts. In view of these deficiencies in AI live broadcasting, the embodiments of the present application provide a method and an apparatus for interactive tracing in a live scene, a computer device, and a storage medium. The following embodiments are described in detail, and the method for interactive tracing in a live scene is described first.
Fig. 1 is an optional flowchart of the method for interactive tracing in a live scene provided in an embodiment of the present application, and the method in Fig. 1 may include, but is not limited to:
step 102, acquiring live script clauses layer by layer from a preset script tree, and broadcasting the live script clauses in a live scene; the script tree comprises a plurality of layers of nodes, and each layer of nodes corresponds to a live script clause;
step 103, if interaction information is detected in the live scene, interrupting the broadcast of the live script clause, broadcasting the reply script corresponding to the interaction information, recording the interruption count, and obtaining the interruption node based on the node corresponding to the live script clause;
step 104, performing an in-order traversal of the script tree based on the interruption node, to obtain the target traversal path sequence between the root node of the script tree and the interruption node;
step 105, storing, with the interruption count as an index, the live script clause corresponding to the root node and the target traversal path sequence in a first linear table;
step 106, if the end of broadcasting the reply script is detected, reading in reverse order with the latest interruption count as the index, and reading out the live script clause corresponding to the root node and the target traversal path sequence from the first linear table;
step 107, querying the script tree based on the live script clause corresponding to the root node, and traversing the script tree based on the target traversal path sequence to obtain the resume node;
and step 108, resuming the broadcast of the live script clauses in the script tree in the live scene, starting from the live script clause corresponding to the resume node.
In one embodiment, before step 102, the method for interactive tracing in a live scene may further include step 101: generating the script tree. This step is optional because the script tree can be obtained in other ways besides generating it locally; for example, the script tree may be generated in advance by another application or server and obtained by requesting it from that application or server.
An advantage of the embodiment of steps 101-108 is that the live script clause of the root node read from the first linear table identifies which script tree to query, while the target traversal path sequence indicates the path from the root node to the interruption node within that tree. The script tree therefore does not have to be traversed layer by layer and node by node, which greatly improves the efficiency of finding the resume node and hence the efficiency of interactive tracing in the live scene. In addition, since each live script clause is broadcast as one node of the script tree, the probability of interrupting the broadcast in the middle of a sentence is greatly reduced, improving the continuity and subjective experience of live interaction. A minimal sketch of this interruption/resume bookkeeping is given below.
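The following is a minimal, illustrative sketch of the interruption/resume bookkeeping described in steps 103-108, not the patented implementation itself: the names ScriptNode, BreakRecord, first_linear_table and the helper functions are assumptions introduced for illustration, and the root-to-node path is recovered here via parent pointers rather than the in-order traversal described above.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)  # identity-based equality so children.index() compares by object
class ScriptNode:
    clause: str                                    # one live script clause
    children: list = field(default_factory=list)   # child nodes (next-layer clauses)
    parent: "ScriptNode | None" = None

@dataclass
class BreakRecord:
    break_count: int    # the interruption count used as the index
    root_clause: str    # live script clause of the root node (identifies the script tree)
    path: list          # target traversal path sequence: child indices from root to interruption node

first_linear_table: list = []   # the "first linear table" of interruption records

def path_from_root(node: ScriptNode) -> list:
    """Recover the root-to-node path as a sequence of child indices."""
    path = []
    while node.parent is not None:
        path.append(node.parent.children.index(node))
        node = node.parent
    return list(reversed(path))

def on_interrupt(interruption_node: ScriptNode, root: ScriptNode) -> None:
    """Record where the broadcast was interrupted, indexed by the interruption count."""
    first_linear_table.append(BreakRecord(
        break_count=len(first_linear_table) + 1,
        root_clause=root.clause,
        path=path_from_root(interruption_node),
    ))

def on_resume(find_tree_by_root_clause) -> ScriptNode:
    """Read the latest record in reverse order and replay the path to obtain the resume node."""
    record = first_linear_table[-1]
    node = find_tree_by_root_clause(record.root_clause)   # query the script tree by its root clause
    for child_index in record.path:                        # replay the target traversal path sequence
        node = node.children[child_index]
    return node
```

In this sketch the resume node is reached by replaying a short index sequence instead of searching the whole tree, which is the efficiency gain the embodiment describes.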
The tree in the embodiments of the present application is typically a multi-way tree. The script tree of the embodiments of the present disclosure comprises a plurality of nodes, including at least one leaf node and one root node, and typically a plurality of intermediate nodes between the leaf nodes and the root node.
In one embodiment, referring to fig. 2, step 101 includes:
step 201, acquiring a live script having a first script category;
step 202, splitting the live script into clauses according to a first preset symbol contained in the live script, to obtain a plurality of live script clauses;
step 203, generating each layer of nodes from the live script clauses based on the order of each live script clause among the plurality of live script clauses, to obtain a script tree having the first script category.
In step 201, a live script refers to text data used for live broadcasting. Live scripts include opening scripts, product scripts, hook scripts, order-holding scripts, order-urging scripts, and the like. The first script category indicates the category of the live script. For example, for an opening script, the first script category is "opening"; for a product script, the first script category is "product". There are certain links between live scripts of different categories, for example the hook script that follows a product script.
In one example, the opening script is "Welcome back, and welcome everyone coming to the live room for the first time; I am the host, Little U." A product script is "This product comes with several value-added services, such as free calls, free answering, and global roaming, making the user's communication more convenient." An order-urging script is "Remember to seize the time to purchase the product you have your eye on; as long as you enter the number or requirements you want in the input box, the robot Little U will automatically filter out and pull up the products you have in mind."
In one embodiment, referring to fig. 3, step 201 includes:
step 301, determining the category of script to be extracted, to obtain the first script category;
step 302, determining a script index value according to the first script category;
step 303, acquiring the live script from a script library according to the script index value.
In step 301, the script category may be opening, product, and so on, as described above. The categories of scripts required differ at different stages of the live broadcast. In one example, the category generally required at the beginning of the broadcast is the opening script, the category that may be required in the middle stage is the product script, and the category required afterwards is the order-urging script. Therefore, the category of script to be extracted can be determined according to the stage of the live broadcast, or according to the actual needs of the host, to finally obtain the first script category.
In step 302, different first script categories correspond to different script index values.
In one embodiment, referring to FIG. 4, step 302 comprises:
step 401, determining the pinyin initial of the first character of the first script category;
step 402, obtaining the ASCII code corresponding to that pinyin initial, to obtain a target ASCII code;
step 403, obtaining the script index value from the remainder of the modulo operation of the target ASCII code by a preset number of letters.
In one example, as shown in FIG. 5, the script library contains sub-libraries for a plurality of different first script categories. For the first script category "opening" (kai chang bai), the pinyin initial of its first character is "K/k"; for "product" (shuo pin) it is "S/s"; for "hook" (gou zi) it is "G/g"; for "order-urging" (cui dan) it is "C/c"; and for "order-closing" (bi dan) it is "B/b".
As shown in fig. 5, for the pinyin initial "k" the corresponding ASCII code is 107, i.e., the target ASCII code is 107. For "s" the ASCII code is 115, for "g" it is 103, for "c" it is 99, and for "b" it is 98.
As shown in fig. 5, assume the preset number of letters is 26. The target ASCII code 107 modulo 26 leaves a remainder of 3, so the script index value is 3. 115 modulo 26 leaves 11, so the script index value is 11. 103 modulo 26 leaves 25, so the script index value is 25. 99 modulo 26 leaves 21, so the script index value is 21. 98 modulo 26 leaves 20, so the script index value is 20.
An advantage of the embodiment of steps 401-403 is that the script index value is determined from the ASCII code of the pinyin initial of the first character, which effectively distinguishes live scripts of different first script categories while improving the efficiency of acquiring the live script based on the script index value. A sketch of this computation is shown below.
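Below is a minimal sketch of the index computation in steps 401-403. The category-to-pinyin-initial mapping and the function name are assumptions introduced for illustration; the arithmetic itself reproduces the worked example above.

```python
ALPHABET_SIZE = 26  # preset number of letters

# Assumed mapping from script category to the pinyin initial of its first character,
# e.g. "opening" (kai chang bai) -> 'k', "product" (shuo pin) -> 's'.
PINYIN_INITIAL = {
    "opening": "k",
    "product": "s",
    "hook": "g",
    "order-urging": "c",
    "order-closing": "b",
}

def script_index_value(first_script_category: str) -> int:
    initial = PINYIN_INITIAL[first_script_category]   # step 401: pinyin initial
    target_ascii = ord(initial)                        # step 402: ASCII code of the initial
    return target_ascii % ALPHABET_SIZE                # step 403: remainder of the modulo operation

# ord('k') = 107 and 107 % 26 = 3, matching the worked example in the text.
assert script_index_value("opening") == 3
assert script_index_value("product") == 11   # ord('s') = 115, 115 % 26 = 11
assert script_index_value("hook") == 25      # ord('g') = 103, 103 % 26 = 25
```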
After the script index value is determined in step 302, in step 303 the live script is acquired from the script library according to the script index value. The index table is a one-to-one lookup formed from the ASCII codes and the 26 English letters, with each letter corresponding to an independent storage space. The script library refers to independent spaces for storing different scripts, such as an opening-script space. A plurality of opening scripts are stored in the space corresponding to opening scripts, and each different opening script corresponds to an independent tree. For example, for a passage of opening script, a plurality of live script clauses are obtained using the period as the division mark, and then each live script clause is stored on the tree in order as a node to obtain an opening script tree.
An advantage of the embodiment of steps 301-303 is that the live script can be looked up in the script library based on the script index value, improving search efficiency.
After the live script is acquired in step 201, in step 202 the live script is split into clauses according to the first preset symbol contained in the live script, to obtain a plurality of live script clauses.
The first preset symbol refers to the sentence-ending punctuation marks contained in the live script, such as the period "。", the exclamation mark "！", and the question mark "？". A clause is the segment of the script delimited by these marks, so each category of live script can first be split by "。", "！" and "？" to obtain a plurality of live script clauses. Thus, the first preset symbol in the embodiments of the present application may be "。", "！", "？", and so on.
It should be noted that one or more commas "，" may exist within a clause; therefore, after splitting by "。", "！" and "？", each clause may be further subdivided by the comma to obtain a plurality of sub-clauses. For example, the live script clause "AAAA, BBBB, CCCC, DDDD" is divided into four sub-clauses. A simple illustrative splitting routine is sketched below.
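A simple, illustrative splitting routine for step 202, assuming Python's standard re module; the function names and the inclusion of both Chinese and Western punctuation are assumptions for illustration, not part of the patented method.

```python
import re

CLAUSE_DELIMITERS = r"[。！？.!?]"      # first preset symbols (sentence-ending marks)
SUBCLAUSE_DELIMITERS = r"[，,]"        # comma marks separating sub-clauses

def split_into_clauses(live_script: str) -> list:
    """Split a live script into live script clauses by sentence-ending punctuation."""
    parts = re.split(CLAUSE_DELIMITERS, live_script)
    return [p.strip() for p in parts if p.strip()]

def split_into_subclauses(clause: str) -> list:
    """Split one live script clause into sub-clauses by commas."""
    parts = re.split(SUBCLAUSE_DELIMITERS, clause)
    return [p.strip() for p in parts if p.strip()]

clauses = split_into_clauses("AAAA, BBBB, CCCC, DDDD. EEEE!")
# -> ["AAAA, BBBB, CCCC, DDDD", "EEEE"]
subclauses = split_into_subclauses(clauses[0])
# -> ["AAAA", "BBBB", "CCCC", "DDDD"]
```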
In step 203, each layer of nodes is generated from the live script clauses based on the order of each live script clause among the plurality of live script clauses, to obtain a script tree having the first script category.
In one example, different passages of live script are divided by the period "。": the script content is stored in the storage structure of a script tree, and the head pointers of the root nodes of different expressions of the same script are stored in the form of a linear linked list. Note that each node of the script tree represents one sentence of a script passage. The linked list stores the head pointers of the root nodes of different trees; for example, the product scripts form their own script trees, the hook scripts their own, and the order-urging scripts their own, and other live scripts can likewise be assigned to specific script trees.
In one example, as shown in FIG. 6, the script trees whose first script category is "opening" include opening tree 0, opening tree 1, opening tree 2, opening tree 3, opening tree 4, and so on. The second linear table includes a data portion and a pointer portion. The data portion of the second linear table stores the first-sentence content of each opening tree, and the pointer portion stores the head pointer of the root node of each opening tree. For example, the data portion stores, in order, the first sentence of opening tree 0, the first sentence of opening tree 1, the first sentence of opening tree 2, the first sentence of opening tree 3, the first sentence of opening tree 4, and so on; the pointer portion stores, in order, the head pointer Rootpoint0 of the root node of opening tree 0, the head pointer Rootpoint1 of opening tree 1, the head pointer Rootpoint2 of opening tree 2, the head pointer Rootpoint3 of opening tree 3, and the head pointer Rootpoint4 of opening tree 4. FIG. 6 also shows the tree structures of opening tree 0 and opening tree 1. Opening tree 0 can be found from the head pointer Rootpoint0 stored in the second linear table, and opening tree 1 can be found from the head pointer Rootpoint1.
As shown in fig. 6, for both opening tree 0 and opening tree 1, the root node corresponds to the live script clause S1. The live script clauses corresponding to the nodes in the layer below the root node are {S1-2, S1-2'}. The clauses corresponding to the nodes below S1-2 are {S1-2-3, S1-2-3', S1-2-3''}. The clause below S1-2-3 is {S1-2-3-4}; the clause below S1-2-3' is {S1-2-3'-4'}; and the clauses below S1-2-3'' are {S1-2-3''-4'', S1-2-3''-4'''}. The clauses below S1-2' are {S1-2'-33, S1-2'-33', S1-2'-33''}. The clause below S1-2'-33 is {S1-2'-33-44}, and the clauses below S1-2'-33'' are {S1-2'-33''-44', S1-2'-33''-44''}. It should be noted that each node corresponds to a different live script clause of the live script; for example, S1-2 and S1-2' refer to different live script clauses.
It should be noted that an opening script has more than one paragraph or sentence. To store a plurality of different opening scripts more effectively, observation of opening-script content shows that an opening script is composed of multiple sentences, which may be separated by periods, exclamation marks, or question marks. In particular, considering that some opening scripts share the same first sentence, two or more opening scripts having the same first sentence may be stored in one tree. For example, in FIG. 6 the opening scripts containing live script clauses S1-2 and S1-2' share the same first sentence, so their common root node is S1. With the above in mind, the head pointers of the root nodes of the different script trees in a script forest of the same category (for example, the opening-script forest) can be stored in the pointer portion of the second linear table. For scripts with the same first sentence, the data portion stores the first sentence corresponding to the root node of that tree. A sketch of this lookup is given below.
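The following is an illustrative sketch, under the assumptions of FIG. 6, of how the second linear table might map a root-node first sentence to the head pointer of its opening tree. The names second_linear_table, register_tree and find_tree_by_root_clause are assumptions introduced for illustration, and the sketch reuses the ScriptNode type from the earlier interruption/resume sketch.

```python
# The "second linear table" for one script category (e.g. the opening-script forest):
# the data portion is the root node's first sentence, the pointer portion is the tree's root.
second_linear_table: dict = {}

def register_tree(root: "ScriptNode") -> None:
    """Store the head pointer of a tree's root node, keyed by its first sentence."""
    second_linear_table[root.clause] = root

def find_tree_by_root_clause(root_clause: str) -> "ScriptNode":
    """Query the script tree from the live script clause of the root node."""
    return second_linear_table[root_clause]
```

This find_tree_by_root_clause function is exactly the lookup passed to on_resume in the earlier sketch, so the resume path only needs the root clause and the target traversal path sequence.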
In one example, a passage is composed of multiple sentences, and a sentence is composed of one or more sub-clauses divided by commas. To locate a particular sub-clause within a sentence more precisely, the content of each node of the script tree (a live script clause) is stored in a linear linked list using the comma as the division mark; each sub-clause is stored in the linked list in order, so that a particular sub-clause can be located precisely and efficiently. As shown in fig. 7, the linear linked list includes a data field (data) and a pointer field. Opening tree 0 includes multiple layers of nodes, each corresponding to a live script clause. Suppose the live script clause includes sub-clause s1 (Sentence 1), sub-clause s2 (Sentence 2), sub-clause s3 (Sentence 3), sub-clause s4 (Sentence 4), and sub-clause s5 (Sentence 5). The data fields of the linked list store s1, s2, s3, s4, and s5 in order, and the pointer field after each sub-clause stores the address of the next sub-clause: the pointer after s1 points to s2, the pointer after s2 points to s3, the pointer after s3 points to s4, and the pointer after s4 points to s5. A sketch of this sub-clause linked list follows.
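Below is a minimal illustrative sketch, assuming the sub-clause names from FIG. 7, of storing a clause's sub-clauses in a singly linked list and finding the resume sub-clause after an interruption; the SubClauseNode class and helper names are assumptions, not the patented data layout.

```python
class SubClauseNode:
    """One sub-clause of a live script clause: data field plus pointer to the next sub-clause."""
    def __init__(self, text: str):
        self.text = text    # data field
        self.next = None    # pointer field

def build_subclause_list(subclauses: list) -> SubClauseNode:
    """Store the sub-clauses of one clause in order in a linear linked list."""
    head = SubClauseNode(subclauses[0])
    node = head
    for text in subclauses[1:]:
        node.next = SubClauseNode(text)
        node = node.next
    return head

def find_resume_subclause(head: SubClauseNode, interrupted_text: str) -> "SubClauseNode | None":
    """Return the sub-clause following the interrupted one (the resume sub-clause)."""
    node = head
    while node is not None:
        if node.text == interrupted_text:
            return node.next
        node = node.next
    return None

head = build_subclause_list(["s1", "s2", "s3", "s4", "s5"])
resume = find_resume_subclause(head, "s3")   # -> the node holding "s4"
```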
An advantage of the embodiment of steps 201-203 is that storing the live script in the nodes of the script tree in the form of live script clauses improves the efficiency of obtaining live script clauses from the script tree.
After the script tree is generated in step 101, in step 102 the live script clauses are acquired layer by layer from the preset script tree and broadcast in the live scene; the script tree comprises multiple layers of nodes, and each layer of nodes corresponds to a live script clause. In one example, starting from the root node of the script tree, each layer of nodes may be visited in a pre-order traversal to obtain the live script clauses. The specific traversal order may be set according to actual needs and is not specifically limited in the embodiments of the present application. An illustrative traversal sketch follows.
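Below is a minimal sketch of such a traversal over the ScriptNode structure from the earlier sketch; yielding clauses one at a time so that the broadcast can be interrupted between clauses is an illustrative design choice, not a detail fixed by the patent.

```python
from typing import Iterator

def preorder_clauses(node: "ScriptNode") -> Iterator[str]:
    """Yield live script clauses by walking the script tree in pre-order from the root."""
    yield node.clause
    for child in node.children:
        yield from preorder_clauses(child)

# Broadcasting then consumes the generator clause by clause, so an interruption
# naturally falls on a clause boundary rather than in the middle of a sentence:
# for clause in preorder_clauses(opening_tree_root):
#     broadcast(clause)
```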
In step 103, if it is detected that interaction information exists in the live scene, broadcasting of the live-talk clause is interrupted, the reply talk corresponding to the interaction information is broadcast, the number of interruptions is recorded, and the break node is obtained based on the node corresponding to the live-talk clause. In one example, a live participation object (also called a user) starts a client and logs in to a personal live account registered on the live platform; the client then establishes a communication connection with the server so as to realize data interaction between the client and the server and meet the user's live-viewing requirements. The user may select any live room to enter; the live scene of that room is then displayed on the live terminal used by the user, the live content of the anchor of that room can be displayed in the live scene, and the live-talk clauses are broadcast.
In one example, the interaction information of an object is a bullet-screen (barrage) message; the interaction information may also be the number of viewers joining the live scene.
In an embodiment, if it is detected that interaction information exists in the live scene, interrupting broadcasting of the live-talk clause and broadcasting the reply talk corresponding to the interaction information includes:
when the number of viewers is less than or equal to a first number, detecting keywords of bullet-screen information in the live scene, and broadcasting the reply talk according to the keywords of the bullet-screen information;
when the number of viewers is greater than the first number and less than or equal to a second number, detecting bullet-screen information in the live scene, and broadcasting the reply talk according to the bullet-screen information; wherein the first number is less than the second number;
when the number of viewers is greater than the second number, broadcasting the reply talk according to bullet-screen information in the live scene only if that information triggers a preset keyword mark;
when the number of viewers is greater than or equal to a third number and less than or equal to a fourth number, detecting, once per first period, bullet-screen information that triggers a preset keyword mark in the live scene, and broadcasting the reply talk according to that information; wherein the third number is greater than the second number and less than the fourth number;
when the number of viewers is greater than the fourth number, taking the question asked most frequently within a preset time as the interaction information;
when the number of persons entering the live scene within a second period is greater than a preset entering-number threshold and the currently broadcast product is not a hot-selling product, broadcasting a welfare talk and then broadcasting the talk corresponding to the hot-selling product;
when the number of persons entering the live scene within the second period is greater than the preset entering-number threshold and the currently broadcast product is a hot-selling product, broadcasting a welfare talk and continuing to broadcast the talk corresponding to the hot-selling product.
An advantage of this embodiment is that different reply rules are set for the different stages of viewer-count growth during live broadcasting, which greatly improves the flexibility and accuracy of replies.
In practical application, the above embodiment specifically covers the following cases (a minimal policy sketch is given after these cases):
(1) If the number of viewers increases suddenly:
when the number of viewers <= 10 (first number): the reply talk is broadcast according to the keywords of the bullet-screen information; if there is no bullet screen, the live-talk clauses keep being broadcast;
when the number of viewers > 10 (first number) and <= 50 (second number): if there is a bullet screen, the current broadcast is interrupted, the question is answered, and broadcasting continues after the reply is finished;
when the number of viewers > 50 (second number): if there is a bullet screen and it is related to the current content (each product talk has keyword marks), the question is answered; if it is unrelated, the system does not reply;
when the number of viewers >= 100 (third number) and < 1000 (fourth number): one currently relevant bullet screen is answered per first period (configurable);
when the number of viewers > 1000 (fourth number): the question asked most often within a predetermined time (configurable) is answered.
(2) In case of a sudden inflow of viewers, for example when the number of persons entering within 5 s (second period) is greater than one fifth of the total number of viewers (preset entering-number threshold):
when the product currently being introduced is not a hot-selling product: the current product talk is interrupted and the break position is marked; red packets are sent and the welfare talk is broadcast; the hot-selling product talk is then broadcast, without replying to bullet screens;
when the product currently being introduced is a hot-selling product: red packets are sent and the welfare talk is broadcast; the hot-selling product talk continues to be broadcast, without replying to bullet screens.
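Condensed into code, the cases above could be expressed roughly as follows; the threshold values are the example numbers from cases (1) and (2), and the function names and returned action strings are placeholders chosen for illustration, not an API of the embodiment:

def handle_bullet_screen(viewers, has_bullet, hits_keyword_mark):
    """Stages of case (1): decide how to react to bullet-screen interaction."""
    if viewers <= 10:                                   # first number
        return "reply using bullet-screen keywords" if has_bullet else "keep broadcasting"
    if viewers <= 50:                                   # second number
        return "interrupt, answer, then resume" if has_bullet else "keep broadcasting"
    if viewers < 100:                                   # above the second, below the third number
        return "answer" if hits_keyword_mark else "do not reply"
    if viewers < 1000:                                  # between the third and fourth numbers
        return "answer one keyword-marked bullet screen per first period"
    return "answer the most frequently asked question within the preset time"

def handle_viewer_surge(entered_in_period, total_viewers, current_is_hot_product):
    """Case (2): sudden inflow of viewers within the second period."""
    if entered_in_period <= total_viewers / 5:          # preset entering-number threshold
        return "no special action"
    if current_is_hot_product:
        return "send red packets, broadcast welfare talk, continue the hot-selling product talk"
    return "mark the break position, broadcast welfare talk, switch to the hot-selling product talk"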
In step 104, an in-order traversal of the talk tree is performed based on the break node, resulting in the target traversal path sequence between the root node of the talk tree and the break node. The order of an in-order traversal is to traverse the left subtree first, then visit the root node, and finally traverse the right subtree. For example, referring to fig. 7, if the break node is the node corresponding to S1-2-3-4, the target traversal path sequence is {1-2-3-4}; if the break node is the node corresponding to S1-2'-33-44, the target traversal path sequence is {1-2'-33-44}.
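A minimal sketch of producing such a path sequence is given below; it simplifies the traversal to a depth-first search that records node labels, and the per-node label attribute is an assumption made only for this illustration:

def find_path(node, break_node, path=()):
    """Return the label sequence from the root down to the break node, or None if absent."""
    path = path + (node.label,)               # 'label' is an assumed per-node tag such as "2'" or "33"
    if node is break_node:
        return path                           # e.g. ("1", "2", "3", "4")
    for child in node.children:
        found = find_path(child, break_node, path)
        if found is not None:
            return found
    return None

# target_sequence = "-".join(find_path(root, break_node))   # e.g. "1-2-3-4"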
In step 105, with the number of interruptions as an index, the live-talk clause corresponding to the root node and the target traversal path sequence are stored in the first linear table. In this way, the interruption count can be used as an index that uniquely identifies the interrupted broadcast. Then, if it is detected that broadcasting of the reply talk has finished, in step 106 the latest interruption count is read in reverse order as an index, and the live-talk clause corresponding to the root node and the target traversal path sequence are read out of the first linear table. Therefore, when broadcasting needs to be resumed, the corresponding root node and target traversal path sequence can be found quickly from the first linear table by means of the interruption count, which improves search efficiency.
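A rough sketch of the first linear table, with a Python dictionary standing in for the linear table of the embodiment and the function names chosen only for illustration, might look as follows:

first_linear_table = {}   # interruption count -> (root clause, target traversal path sequence, break small clause)
break_count = 0

def record_break(root_clause, path_sequence, break_small_clause):
    """Step 105: store the data of one interruption, indexed by the running interruption count."""
    global break_count
    break_count += 1
    first_linear_table[break_count] = (root_clause, path_sequence, break_small_clause)

def read_last_break():
    """Step 106: after the reply talk has finished, read back the entry of the most recent interruption."""
    return first_linear_table[break_count]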
In one embodiment, information of a plurality of phone trees of the same second phone category is stored in the second linear table, the information including a live phone clause of a root node of the phone tree and a head pointer of the root node. Referring to fig. 8, step 107 includes:
step 501, determining a second conversation category based on a live conversation sentence corresponding to the root node;
Step 502, taking a live phone sentence corresponding to the root node as an index, and reading a second linear table corresponding to a second phone category to obtain a head pointer of the root node;
step 503, querying the talk tree based on the head pointer of the root node.
Specifically, when querying the stored data of a specific node in a certain talk tree, the category of the talk library to which the queried live-talk clause belongs is first determined. Assuming that the determined second talk category is the opening speech, the second linear table corresponding to the opening speech is acquired. Then, with the live-talk clause corresponding to the root node as an index, that second linear table is read to obtain the head pointer of the root node. Finally, the talk tree is queried based on the head pointer of the root node.
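As an illustrative sketch (again with dictionaries standing in for the linear tables, and the category names chosen arbitrarily), the lookup of steps 501-503 can be written as:

# One second linear table per talk category; the data part is the root clause,
# the pointer part is the root node itself (standing in for the "head pointer").
second_linear_tables = {
    "opening speech": {},     # root live-talk clause -> root TalkNode of its talk tree
    "product talk": {},
}

def query_talk_tree(root_clause, category):
    """Steps 501-503: pick the table of the second talk category, index it by the root clause."""
    table = second_linear_tables[category]    # steps 501-502: table for the determined category
    head_pointer = table[root_clause]         # step 502: head pointer of the root node
    return head_pointer                       # step 503: the talk tree is reached through this pointer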
The advantage of the embodiment of steps 501-503 is that the talk tree can be queried quickly by using the live-talk clause of the root node and the head pointer of the root node stored in the second linear table, which improves query efficiency.
In one embodiment, a live-talk clause includes a plurality of small clauses, each small clause is stored in a linear linked list in sequence, and the node at each layer stores the linear linked list corresponding to its live-talk clause. After the talk tree has been queried, the follow-up node is found from the talk tree according to the target traversal path sequence. If the object of the query is a live-talk clause, the query already retrieves the content required. If the object of the query is a small clause within a live-talk clause, the linear linked list stored at the follow-up node has to be searched additionally, on the basis of having traversed to the follow-up node, until the specific small clause is found.
In particular implementations of this embodiment, referring to fig. 9, step 108 includes:
step 601, obtaining the break small clause at which the live-talk clause was interrupted;
step 602, searching the linear linked list corresponding to the follow-up node based on the break small clause, to obtain the follow-up small clause;
step 603, starting with the follow-up small clause, continuing to broadcast the live-talk clauses in the talk tree in the live scene.
In step 601, the break small clause has already been stored in the first linear table together with the live-talk clause corresponding to the root node and the target traversal path sequence, so the break small clause can be obtained from the first linear table when broadcasting is resumed.
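A sketch of the resume step, reusing the SmallClauseNode/TalkNode structures from the earlier sketches, is given below; it assumes that the follow-up small clause is the one immediately after the break small clause, and print() merely stands in for broadcasting in the live scene:

def resume_from_break(follow_up_node, break_small_clause):
    """Steps 601-603: find the break small clause in the node's linked list and broadcast onwards."""
    cur = follow_up_node.head
    while cur is not None and cur.data != break_small_clause:
        cur = cur.nxt
    if cur is not None:
        cur = cur.nxt             # assumption: the follow-up small clause is the one right after the break
    while cur is not None:
        print("broadcast:", cur.data)          # placeholder for broadcasting in the live scene
        cur = cur.nxt
    # broadcasting would then continue with the remaining nodes of the talk tree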
The advantage of the embodiment of steps 601-603 is that the live-talk clause is divided into a plurality of small clauses which are stored in advance in a linear linked list, and searching that linked list improves retrieval efficiency. In addition, broadcasting in the form of small clauses can reduce the waiting time between detecting that interaction information exists in the live scene and broadcasting the reply talk, further improving interaction efficiency.
In summary, the innovations of the embodiments of the present application are as follows. First, the linear table, the tree structure and the linear linked list are combined, so that the live talk is sliced into finer-grained data; with this storage scheme, a specific category of live talk, or a small clause within that category, can be located and queried accurately. Second, query efficiency is improved: when a viewer sends interaction information during AI live broadcasting, the position at which a specific talk was broken off mid-sentence is marked, the break position and the data content are sent to the background for storage, and after the reply is finished the break position is found again through the above storage and query scheme, so that broadcasting can resume from that position. This also improves the effect of AI live broadcasting.
Referring to fig. 10, the embodiment of the present application further provides a device for tracing live scene interaction, which may implement the method for tracing live scene interaction, and fig. 10 is a block diagram of a module structure of the device for tracing live scene interaction provided in the embodiment of the present application, where the device includes: a generating module 701, a first broadcasting module 702, a breaking module 703, a first traversing module 704, a storing module 705, a reading module 706, a second traversing module 707, and a second broadcasting module 708. Wherein, the generating module 701 is configured to generate a speech tree; the first broadcasting module 702 is configured to acquire live phone operation clauses layer by layer from a preset phone operation tree, and broadcast the live phone operation clauses in a live scene; the phone operation tree comprises a plurality of layers of nodes, and the nodes of each layer correspond to the phone operation clauses; the breaking module 703 is configured to break broadcasting of the live-broadcast-operation clause if it is detected that the live-broadcast scene has the interactive information, broadcast a reply-operation corresponding to the interactive information, record the breaking times, and obtain breaking nodes based on the nodes corresponding to the live-broadcast-operation clause; a first traversing module 704, configured to perform a middle-order traversal based on the break node dialogue tree, to obtain a target traversal path sequence between a root node of the dialogue tree and the break node; the storage module 705 is configured to store, in a first linear table, a live conversation sentence corresponding to a root node and a target traversal path sequence with the number of interruptions as an index; a reading module 706, configured to read, if it is detected that the broadcast reply session has ended, a live session phrase corresponding to the root node and a target traversal path sequence from the first linear table by reading the last interruption count as an index in reverse order; the second traversing module 707 is configured to trace back a speech tree based on the live speech clause corresponding to the root node, and traverse the speech tree based on the target traversal path sequence to obtain a follow-up node; the second broadcasting module 708 is configured to start with the live-speaking clause corresponding to the follow-up node, and continue broadcasting the live-speaking clause in the live-speaking tree in the live-speaking scene.
It should be noted that, the specific implementation manner of the device for tracing back the live scene interaction is basically the same as the specific embodiment of the method for tracing back the live scene interaction, which is not described herein.
The embodiment of the application also provides computer equipment, which comprises: the system comprises a memory, a processor, a program stored in the memory and capable of running on the processor, and a data bus for realizing connection communication between the processor and the memory, wherein the program is executed by the processor to realize the method for interactive tracing of the live broadcast scene. The computer equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 11, fig. 11 illustrates a hardware structure of a computer device according to another embodiment, the computer device includes:
the processor 801 may be implemented by a general purpose CPU (Central Processing Unit ), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solutions provided by the embodiments of the present application;
the Memory 802 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a random access Memory (Random Access Memory, RAM). The memory 802 may store an operating system and other application programs, and when the technical solution provided in the embodiments of the present application is implemented by software or firmware, relevant program codes are stored in the memory 802, and the processor 801 invokes a method for executing live scene interaction tracing in the embodiments of the present application;
An input/output interface 803 for implementing information input and output;
the communication interface 804 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.), or may implement communication in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 805 that transfers information between the various components of the device (e.g., the processor 801, the memory 802, the input/output interface 803, and the communication interface 804);
wherein the processor 801, the memory 802, the input/output interface 803, and the communication interface 804 implement communication connection between each other inside the device through a bus 805.
The embodiment of the application also provides a storage medium, which is a computer readable storage medium and is used for computer readable storage, the storage medium stores one or more programs, and the one or more programs can be executed by one or more processors so as to realize the method for interactive tracing of the live broadcast scene.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1-4 and fig. 8-9 are not limiting to embodiments of the present application and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing a program.
Preferred embodiments of the present application are described above with reference to the accompanying drawings, and thus do not limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A method for interactive traceability of live scenes, the method comprising:
acquiring live phone operation clauses layer by layer from a preset phone operation tree, and broadcasting the live phone operation clauses in a live scene; wherein the phone tree comprises a plurality of layers of nodes, and the nodes of each layer correspond to the phone clause;
if the interaction information exists in the live broadcast scene is detected, broadcasting of the live broadcast speech operation clause is interrupted, a reply speech operation corresponding to the interaction information is broadcasted, the interruption times are recorded, and based on the node corresponding to the live broadcast speech operation clause, an interruption node is obtained;
performing medium-order traversal on the telephone tree based on the breaking node to obtain a target traversal path sequence between the root node of the telephone tree and the breaking node;
taking the breaking times as an index, and storing a live conversation sentence corresponding to the root node and the target traversal path sequence in a first linear table;
if the fact that the broadcasting of the reply voice operation is finished is detected, reading the last breaking times as an index through reverse order, and reading out a live voice operation clause corresponding to the root node and the target traversal path sequence from the first linear table;
inquiring the speaking tree based on the live speaking clause corresponding to the root node, and traversing the speaking tree based on the target traversing path sequence to obtain a continuous reporting node;
and continuously broadcasting the live conversation clause in the conversation tree in the live broadcasting scene by taking the live conversation clause corresponding to the continuous report node as a start.
2. The method of claim 1, wherein information of a plurality of phone trees of a same second phone category is stored in a second linear table, the information including a live phone clause of a root node of the phone tree and a head pointer of the root node;
the step of inquiring the phone operation tree based on the live phone operation clause corresponding to the root node comprises the following steps:
determining the second conversation category based on the live conversation clause corresponding to the root node;
reading the second linear table corresponding to the second conversation category by taking the live conversation clause corresponding to the root node as an index to obtain a head pointer of the root node;
and inquiring the voice operation tree based on the head pointer of the root node.
3. The method for interactive traceability of live scenes according to claim 1, wherein said live-talk clause comprises a plurality of small clauses, each of said small clauses is stored in a linear linked list in sequence, and said nodes of each layer store said linear linked list to which said live-talk clause corresponds;
The step of continuing to broadcast the live conversation clause in the conversation tree in the live broadcasting scene by taking the live conversation clause corresponding to the continuous report node as a start, comprising the following steps:
obtaining a broken small clause broken in the live conversation process clause;
searching the linear linked list corresponding to the follow-up node based on the breaking small clause to obtain a follow-up small clause;
and starting with the continuous report small clause, and continuously broadcasting the live conversation clause in the conversation tree in the live broadcasting scene.
4. The method of claim 1, wherein if it is detected that the live scene has the interaction information, interrupting broadcasting the live speaking clause, broadcasting a reply speaking corresponding to the interaction information, comprising:
when the number of the viewers is smaller than or equal to the first number, detecting keywords of barrage information in the live broadcast scene, and broadcasting the answer speech according to the keywords of the barrage information;
when the number of the watched persons is larger than the first number and smaller than or equal to the second number, bullet screen information in the live broadcasting scene is detected, and the answering operation is broadcasted according to the bullet screen information; wherein the first number is less than the second number;
when the number of the watched persons is larger than the second number, bullet screen information in the live broadcasting scene is detected, and if the bullet screen information triggers a preset keyword mark, the answering operation is broadcasted according to the bullet screen information; detecting bullet screen information in the live broadcasting scene at intervals of a first period when the number of viewers is greater than or equal to the third number and less than or equal to the fourth number, and broadcasting the answering operation according to the bullet screen information if the bullet screen information triggers a preset keyword mark; wherein the third number is greater than the second number, the third number being less than the fourth number;
when the number of the viewers is larger than the fourth number, taking the question with the largest questioning frequency in the preset time as the interaction information;
when the number of persons entering the live broadcast scene in the second period is larger than a preset threshold value of the number of persons entering the live broadcast scene, if the currently broadcast item is not a hot item, broadcasting a welfare operation and a conversation corresponding to the hot item;
when the number of the entering persons entering the live broadcasting scene in the second period is larger than a preset threshold value of the number of the entering persons, if the currently broadcasted item is a hot item, broadcasting a welfare operation and continuing to broadcast a conversation operation corresponding to the hot item.
5. The method of live scene interaction traceback of any of claims 1 to 4, wherein prior to said obtaining live conversation phrases layer by layer from a preset conversation tree, the method further comprises generating the conversation tree, the generating the conversation tree comprising:
acquiring a live phone with a first phone category;
the live phone operation is divided according to a first preset symbol contained in the live phone operation, and a plurality of live phone operation divided sentences are obtained;
and generating the nodes of each layer by the live conversation clause based on the sequence of each live conversation clause in a plurality of live conversation clauses, so as to obtain the conversation tree with the first conversation category.
6. The method for interactive traceability of live scenes according to claim 5, wherein said obtaining live speech with a first speech category comprises:
determining the category of the required conversation extraction, and obtaining the first conversation category;
determining a speaking index value according to the first speaking category;
and acquiring the live phone from a phone library according to the phone index value.
7. The method of live scene interaction traceback of claim 6, wherein determining a conversation index value from the first conversation category comprises:
Determining the first word pinyin initial letter of the first conversation category;
acquiring an ASCII code corresponding to the initial letter of the first word pinyin to obtain a target ASCII code;
and obtaining the speaking index value according to the remainder of the modular operation of the target ASCII code and the preset letter number.
8. A device for interactive traceability of live scenes, the device comprising:
the first broadcasting module is used for acquiring live conversation clauses layer by layer from a preset conversation tree and broadcasting the live conversation clauses in a live broadcasting scene; wherein the phone tree comprises a plurality of layers of nodes, and the nodes of each layer correspond to the phone clause;
the breaking module is used for breaking the broadcasting of the live broadcasting operation clause if the interaction information exists in the live broadcasting scene, broadcasting the reply operation corresponding to the interaction information, recording breaking times, and obtaining breaking nodes based on the nodes corresponding to the live broadcasting operation clause;
the first traversing module is used for performing medium-order traversing on the telephone tree based on the breaking node to obtain a target traversing path sequence between the root node of the telephone tree and the breaking node;
the storage module is used for storing the live conversation clause corresponding to the root node and the target traversal path sequence in a first linear table by taking the breaking times as indexes;
The reading module is used for reading the last interruption times as indexes through reverse order if the fact that the reply call operation is broadcasted is detected to be ended, and reading out a live call operation clause corresponding to the root node and the target traversal path sequence from the first linear table;
the second traversing module is used for inquiring the conversation tree based on the live conversation clause corresponding to the root node, and traversing the conversation tree based on the target traversing path sequence to obtain a continuous report node;
and the second broadcasting module is used for continuously broadcasting the live conversation clause in the conversation tree in the live broadcasting scene by taking the live conversation clause corresponding to the continuous broadcasting node as a start.
9. A computer device comprising a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling a connected communication between the processor and the memory, the program when executed by the processor implementing the steps of the method according to any of claims 1 to 7.
10. A storage medium, which is a computer-readable storage medium, for computer-readable storage, characterized in that the storage medium stores one or more programs executable by one or more processors to implement the steps of the method of any one of claims 1 to 7.
CN202311668072.9A 2023-12-07 2023-12-07 Live scene interactive tracing method and device, computer equipment and storage medium Active CN117376652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311668072.9A CN117376652B (en) 2023-12-07 2023-12-07 Live scene interactive tracing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117376652A CN117376652A (en) 2024-01-09
CN117376652B true CN117376652B (en) 2024-04-09

Family

ID=89400645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311668072.9A Active CN117376652B (en) 2023-12-07 2023-12-07 Live scene interactive tracing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117376652B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423363A (en) * 2017-06-22 2017-12-01 百度在线网络技术(北京)有限公司 Art generation method, device, equipment and storage medium based on artificial intelligence
CN111346376A (en) * 2020-02-25 2020-06-30 腾讯科技(深圳)有限公司 Interaction method and device based on multimedia resources, electronic equipment and storage medium
CN111434118A (en) * 2017-11-10 2020-07-17 三星电子株式会社 Apparatus and method for generating user interest information
CN113727121A (en) * 2020-12-18 2021-11-30 北京沃东天骏信息技术有限公司 Network live broadcast method and device, electronic equipment and computer readable medium
CN113784167A (en) * 2021-10-11 2021-12-10 福建天晴数码有限公司 3D rendering-based interactive video making and playing method and terminal
CN115002497A (en) * 2022-05-27 2022-09-02 上海哔哩哔哩科技有限公司 Live broadcast source returning scheduling method and system and source returning server
CN115348458A (en) * 2022-05-16 2022-11-15 阿里巴巴(中国)有限公司 Virtual live broadcast control method and system
WO2023169252A1 (en) * 2022-03-08 2023-09-14 腾讯科技(深圳)有限公司 Multimedia content processing method and apparatus, device, program product, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10271077B2 (en) * 2017-07-03 2019-04-23 At&T Intellectual Property I, L.P. Synchronizing and dynamic chaining of a transport layer network service for live content broadcasting

Also Published As

Publication number Publication date
CN117376652A (en) 2024-01-09

Similar Documents

Publication Publication Date Title
US20200301954A1 (en) Reply information obtaining method and apparatus
CN110046236B (en) Unstructured data retrieval method and device
CN109726274B (en) Question generation method, device and storage medium
US20160180237A1 (en) Managing a question and answer system
KR20180107147A (en) Multi-variable search user interface
CN107798123B (en) Knowledge base and establishing, modifying and intelligent question and answer methods, devices and equipment thereof
CN111259173B (en) Search information recommendation method and device
CN111046225B (en) Audio resource processing method, device, equipment and storage medium
US10313403B2 (en) Systems and methods for virtual interaction
CN110377745B (en) Information processing method, information retrieval device and server
KR20220006491A (en) Method, apparatus, electronic device, storage medium and computer program for generating comment subtitle
CN113157727A (en) Method, apparatus and storage medium for providing recall result
CN110245357B (en) Main entity identification method and device
CN116882372A (en) Text generation method, device, electronic equipment and storage medium
CN111737408A (en) Dialogue method and equipment based on script and electronic equipment
CN117376652B (en) Live scene interactive tracing method and device, computer equipment and storage medium
JP2019121060A (en) Generation program, generation method and information processing apparatus
CN109033082B (en) Learning training method and device of semantic model and computer readable storage medium
CN110147358B (en) Construction method and construction system of automatic question-answering knowledge base
JP4279883B2 (en) Conversation control system, conversation control method, program, and recording medium recording program
CN114969250A (en) Man-machine conversation generation method and device, electronic equipment and storage medium
CN109284364B (en) Interactive vocabulary updating method and device for voice microphone-connecting interaction
CN112528046A (en) New knowledge graph construction method and device and information retrieval method and device
CN113051375A (en) Question-answering data processing method and device based on question-answering equipment
CN115878849B (en) Video tag association method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant