CN115309385A - Visual IVVR editor based on table, editing method, equipment and medium - Google Patents


Info

Publication number
CN115309385A
CN115309385A (application CN202210945257.9A)
Authority
CN
China
Prior art keywords
ivvr
module
node
user
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210945257.9A
Other languages
Chinese (zh)
Inventor
罗岚
李韩
张晶晶
庞文刚
乔治
邹西山
李雪欣
戈翔
陈星�
罗志亮
张杰辉
温雪阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unicom WO Music and Culture Co Ltd
Original Assignee
China Unicom WO Music and Culture Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unicom WO Music and Culture Co Ltd filed Critical China Unicom WO Music and Culture Co Ltd
Priority to CN202210945257.9A priority Critical patent/CN115309385A/en
Publication of CN115309385A publication Critical patent/CN115309385A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/34Graphical or visual programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to a table-based visual IVVR editor, an editing method, a device, and a medium. The table-based visual IVVR editor comprises an input module, an analysis module, a capability engine module, and a storage module. The input module is connected with the storage module and stores data input by the user into the storage module; the analysis module is connected with the storage module, reads the data in the storage module, and parses and executes it; the capability engine module is connected with the analysis module and provides the corresponding audio/video synthesis and intelligent speech recognition capabilities. By editing service flow data into an IVVR script in table form through a self-developed graphical component, the invention lowers the learning cost of the IVVR editor, allows the IVVR script to be edited quickly and conveniently in table form, simplifies the IVVR editing flow, and improves working efficiency.

Description

Table-based visual IVVR editor, editing method, device and medium
Technical Field
The invention relates to the field of computer technology, and in particular to a table-based visual IVVR editor, an editing method, a device, and a medium.
Background
With the popularization of 4G/5G and Internet video, video call centers have gradually entered the call center field, and demand for IVVR services has grown. IVVR (Interactive Voice and Video Response) is a new wireless voice and video response value-added service: a mobile phone user obtains the required information or participates in an interactive service by dialing a designated number. With the help of video and voice, IVVR highlights its "interactive" character.
With the development of 5G video technology, demand for IVVR services keeps growing. Given the strong interactivity, simple operation, and low handset requirements of IVVR services, a series of commercial services can be offered to users on this basis, such as video on demand and live broadcast, video download, video-clip delivery, video recording, video outbound calls, video chat, real-time video interaction, and video monitoring. For example:
Video-clip delivery: while browsing related videos through the video IVR system, a user can enter the mobile phone numbers of other 3G users as prompted to send the current video clip to them; the system lets the recipient watch the video via an outbound call or an SMS notification. The system also supports prepending a video recorded by the caller to the ordered program.
Video conference: a user can directly dial the video telephony service number and summon several video phones into a video conference through video interaction; the system thus provides a visual, convenient, and user-friendly way to convene video conferences.
Video call center: video calls are introduced into call center applications. The video call center provides a new video-based communication channel between the agent and the user, improving the user experience; through the richness of video, enterprises can also present more content, and users can conveniently obtain more enterprise and product information.
However, in existing call centers there are two main ways of editing IVVR. The first is writing the code by hand, which is difficult, has too high a learning cost, and is inefficient, making it unsuitable for large-scale commercial use. The second is editing on a canvas, which improves on the first in learning cost and working efficiency but still has a certain complexity; the learning cost remains rather high and the efficiency rather low.
Disclosure of Invention
To address these problems, the invention provides a table-based visual IVVR editor and editing method that edit the service in a table form easy for business personnel to understand; the approach better fits practical application scenarios, is convenient to operate, reduces working cost, and improves working efficiency.
The invention provides a table-based visual IVVR editor, which is characterized by comprising:
the device comprises an input module, an analysis module, a capability engine module and a storage module;
the input module is connected with the storage module and is used for storing data input by a user into the storage module;
the analysis module is connected with the storage module and is used for reading the data in the storage module and parsing and executing it;
the capability engine module is connected with the analysis module and used for providing corresponding audio and video synthesis capability and intelligent voice recognition capability.
Further, the input module is a table-based graphical component; the table comprises a plurality of rows, each row corresponds to a node and comprises a plurality of cells, and the cells are used for editing the attribute information of the node. The cells include, but are not limited to, a node name cell, a play content cell, and a conditional exit node cell.
Further, the conditional exit node cell can be decomposed into several exit node cells, which represent different exit nodes.
Further, the input module is also used for packaging the data input by the user into an IVVR script.
Further, the analysis module is used for parsing and executing the IVVR script, which includes retrieving the current node, calling the capability engine module to play the corresponding audio and video content according to the content set in the current node's script, and responding to the user according to the script logic.
The invention also provides a table-based visual IVVR editing method which is characterized by comprising the following steps:
newly building a blank service, and selecting the flow editing of the service;
adding a record row in the flow table;
editing the node name through the node name cell;
editing the node's play content type (audio, video, or audio and video) through the play content cell, and selecting the corresponding content;
editing the exit type of the flow node through the conditional exit node cell: a default jump, a jump according to a user key press, or an intelligent jump according to what the user says;
selecting the target node to jump to;
and judging whether the flow table is completely edited; if not, adding nodes again; otherwise, finishing the editing.
The invention has the following beneficial effects: the table-based visual IVVR editor provided by the invention abandons the complex structure of the traditional canvas; editing can be completed with nothing but a table. Through a self-developed graphical component, service flow data is edited into an IVVR script in table form, which lowers the learning cost of the IVVR editor, makes reading, understanding, and editing convenient for the user, allows the IVVR script to be edited quickly and conveniently in table form, simplifies the IVVR editing flow, and improves working efficiency.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings used in the description of the embodiments will be briefly described as follows:
Fig. 1 shows a table-based visual IVVR editor diagram according to a first embodiment of the present invention.
Fig. 2 shows a flowchart of a table-based visual IVVR editing method according to a second embodiment of the present invention.
Figs. 3(A)-3(G) show the page displays and results during flow editing with the editor of the present invention.
Fig. 4 shows a schematic structural diagram of a computer-readable storage medium of the present invention.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be carried into practice or applied to various other specific embodiments, and various modifications and changes may be made in the details within the description and the drawings without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be further noted that the drawings provided in the following embodiments are only schematic illustrations of the basic concepts of the present disclosure, and the drawings only show the components related to the present disclosure rather than the numbers, shapes and dimensions of the components in actual implementation, and the types, the numbers and the proportions of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as implying relative importance.
The following description provides embodiments of the invention, which may be combined with or substituted for one another; the invention should thus be construed as embracing all possible combinations of the embodiments described. Thus, if one embodiment includes features A, B, and C and another embodiment includes features B and D, the invention should also be construed as including embodiments containing all other possible combinations of one or more of A, B, C, and D, even if such embodiments are not explicitly recited in the following text.
Example one
Fig. 1 shows a table-based visual IVVR editor schematic of a first embodiment of the present invention.
As shown in fig. 1, a table-based visual IVVR editor of the present invention comprises:
an input module 102, a parsing module 104, a capability engine module 106, and a storage module 108.
The input module 102 is a table-based graphical component; the table comprises a plurality of rows, each row corresponds to a node and comprises a plurality of cells, and the cells are used for editing the attribute information of the node. The cells include, but are not limited to, a node name cell, a play content cell, and a conditional exit node cell.
Further, the conditional exit node cell can be decomposed into several exit node cells, which represent different exit nodes.
Further, the input module 102 is also configured to package data input by the user into an IVVR script. The input module 102 is mainly used for graphical display; it supports input of text, pictures, and files, and supports editing of the IVVR script through page input and file import. An IVVR script is the code that the IVVR flow executes.
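The packaging step can be sketched as follows. This is a hypothetical sketch only: the patent does not disclose a concrete script format, so the JSON layout and all field names ("node_name", "play", "exits") are assumptions.

```python
import json

def package_script(rows):
    """Package edited flow-table rows into an IVVR script.

    Each row carries a node name, a play-content cell, and a list of
    exit cells; the JSON shape used here is a hypothetical sketch.
    """
    nodes = {}
    for row in rows:
        nodes[row["node_name"]] = {
            "play": {"type": row["play_type"], "content": row["content"]},
            "exits": row["exits"],  # e.g. [{"condition": "default", "target": ...}]
        }
    return json.dumps({"nodes": nodes}, ensure_ascii=False)

rows = [
    {"node_name": "opening", "play_type": "audio", "content": "welcome.wav",
     "exits": [{"condition": "default", "target": "question1"}]},
]
script_json = package_script(rows)
```

One table row maps to exactly one script node, which is what keeps the editor a flat table rather than a canvas.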
The input module 102 is connected to the storage module 108 and is configured to store data input by the user into the storage module 108.
Further, the parsing module 104 is configured to parse and execute the IVVR script: it retrieves the current node, invokes the capability engine module to play the corresponding audio and video content according to the content set in the current node's script, and responds to the user according to the script logic. Responding to the user includes logically evaluating the user's key press or voice reply according to the script logic to control the jumps of the program.
The parsing module 104 is connected to the capability engine module 106 and is configured to read the data in the storage module 108 and parse and execute it.
The capability engine module 106 is configured to provide the corresponding audio/video synthesis and intelligent speech recognition capabilities. It provides these capabilities to the parsing module, mainly for synthesizing audio and video content and for intelligent intent recognition of the audio input by the user.
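A minimal sketch of the parse-and-execute loop described above. The engine interface, the script field names, and the exit-rule shapes are assumptions, not the patent's actual implementation.

```python
class CapabilityEngine:
    """Stand-in for the capability engine module 106 (interface assumed)."""
    def play(self, play_type, content):
        # A real engine would synthesize and stream audio/video here.
        return f"played {play_type}:{content}"

def run_node(script, node_name, engine, key=None):
    """Retrieve the current node, play its content via the engine,
    then resolve the next node from the script logic (sketch only)."""
    node = script["nodes"][node_name]
    played = engine.play(node["play"]["type"], node["play"]["content"])
    # A matching key press wins over the default jump.
    for rule in node["exits"]:
        if rule["condition"] == "key" and rule.get("key") == key:
            return played, rule["target"]
    for rule in node["exits"]:
        if rule["condition"] == "default":
            return played, rule["target"]
    return played, None

script = {"nodes": {
    "question1": {"play": {"type": "video", "content": "q1.mp4"},
                  "exits": [{"condition": "key", "key": "1", "target": "question2"},
                            {"condition": "default", "target": "end"}]},
}}
played, nxt = run_node(script, "question1", CapabilityEngine(), key="1")
```

Calling `run_node` repeatedly with each node's returned target is enough to drive a whole flow.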
Example two
Fig. 2 shows a flowchart of a table-based visual IVVR editing method according to a second embodiment of the present invention.
The invention also provides a table-based visual IVVR editing method which is characterized by comprising the following steps:
s1, a blank service is newly established.
S2, selecting the flow of the service for editing.
And S3, adding a record row in the process table.
Adding a new record row is the process of adding an IVVR node.
And S4, editing the node name through the node name cell.
Here, editing the node name sets the name of the node.
And S5, playing the content type through the playing content cell editing node.
Wherein the playing content type comprises audio, video or audio-video.
Further, the step S5 specifically includes the following steps:
S5.1: the user selects whether to play audio only; if so, the user goes on to select the content to play;
S5.2: the user selects whether to play video only; if so, the user goes on to select the content to play;
S5.3: the user selects whether to play audio and video; if so, the user goes on to select the content to play.
And S6, selecting corresponding content.
Further, the step S6 specifically includes the following steps:
S6.1: the user sets a default-jump node, i.e., the node to jump to when the node's service processing ends;
S6.2: the user sets key-press jumps, e.g., pressing "1" jumps to one node and pressing "2" jumps to another node;
S6.3: the user sets intent jumps: according to what the caller says during use, the node jumps to different nodes after intent recognition.
S7, editing the exit type of the flow node through the conditional exit node cell: a default jump, a jump according to a user key press, or an intelligent jump according to what the user says.
And S8, selecting the target node to jump to.
And S9, judging whether the process table is edited completely, if not, jumping to the step S3, otherwise, jumping to the step S10.
And S10, finishing editing.
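The completeness check of step S9 can be sketched as a validation pass over the flow table. The row fields and the terminal node name are assumptions based on the example in fig. 3.

```python
def flow_table_complete(rows, end_marker="null [end]"):
    """Step S9 sketch: the table is complete when every node has its play
    content set and every exit target names an existing node or the
    terminal marker (field names are hypothetical)."""
    names = {row["node_name"] for row in rows} | {end_marker}
    for row in rows:
        if not row.get("play_type"):
            return False  # play content (step S5) not yet set for this node
        for rule in row["exits"]:
            if rule["target"] not in names:
                return False  # dangling jump: a node must still be added (step S3)
    return True

rows = [
    {"node_name": "opening", "play_type": "audio",
     "exits": [{"target": "question1"}]},
    {"node_name": "question1", "play_type": "video",
     "exits": [{"target": "null [end]"}]},
]
complete = flow_table_complete(rows)
```

A failed check corresponds to the "no" branch of S9, which loops the editor back to adding rows.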
Figs. 3(A)-3(G) show the page displays and results of the editor during flow editing.
The table shown in fig. 3(A) has 5 rows in total. The first row is the header, comprising the node name, the flow content, the jump condition + jump node, and the operation options. From the second row on, each row describes one flow node.
After the system connects to the service flow, the first node is executed first (in fig. 3(A), "opening remarks + question 1"); after the flow content is played, the system jumps to the corresponding subsequent node according to the user's operation (e.g., rejecting question 1 proceeds to question 2), until it jumps to the "null [end]" node and the flow ends.
The node description is expanded in detail using the second-row node (question 2):
(1) Adding a node and setting the node name.
Select "new" below the table, and the contents of fig. 3(B) appear:
the user can enter the node name, click save to add the new node, and edit the flow content later; or,
as shown in fig. 3(C), enter the node name, edit the flow content (specifically, set the play content and jump condition), and click save.
(2) Setting the flow content, for example setting audio: as shown in fig. 3(D), 3(E), and 3(F) respectively, the audio can be set to TTS, file, or mixing, which are mutually exclusive options.
TTS: the form is simple text-to-speech, the user directly inputs the text, and the system plays the corresponding pronunciation.
File: and playing the audio files uploaded to the system in advance.
Mixing: advanced use of TTS mixed with file forms, where variable pitch is the recognition of a particular text format. For example, 200, the phonetic reading of the number is "two zero", and the phonetic reading of the amount is "two hundred".
(3) Setting node exits
The jump forms in step S7 fall into three types: key press, tag recognition (e.g., speech recognition), and default jump.
The triggers of the three jump forms are shown in fig. 3(G).
(a) Key press: during the call, the user confirms a selection through the keypad keys, and the system jumps to different nodes according to the instructions defined for the keys.
(b) Tag recognition (speech recognition): a tag means the system has recognized the intent of what the user says. During the interaction, the user answers a sentence in natural language, such as "I want to record a video." The tag robot (the carrier of the tag capability) recognizes the user's intent, maps the natural language to a certain tag, and jumps to the corresponding node. For example, "What is your location?" is analyzed as natural language and, by sentences and rules, tagged as "asking for an address".
Tags are set for the robot according to the specific application scenario; in customer service, for example, "Chinese" and "English" tags can be set so that the flow jumps directly to the Chinese node.
(c) Default jump: after the node content is played, if the user takes no action within the specified time, or takes an action the service has not predefined, the default jump is performed. This is generally the lowest-priority case.
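The three trigger types and their ordering can be sketched as a resolver. The rule shapes are assumptions; the default jump is tried last to reflect its lowest priority.

```python
def resolve_exit(exits, key=None, tag=None):
    """Pick the next node after playback finishes (sketch).

    A matching key press or recognized tag wins over the default
    jump, which is used only as the lowest-priority fallback.
    """
    for rule in exits:
        if rule["condition"] == "key" and key is not None and rule["key"] == key:
            return rule["target"]
    for rule in exits:
        if rule["condition"] == "tag" and tag is not None and rule["tag"] == tag:
            return rule["target"]
    for rule in exits:
        if rule["condition"] == "default":
            return rule["target"]
    return None

exits = [
    {"condition": "key", "key": "1", "target": "record_video"},
    {"condition": "tag", "tag": "question address", "target": "address_node"},
    {"condition": "default", "target": "null [end]"},
]
```

With neither a key press nor a recognized tag, the resolver falls through to the default exit, matching case (c) above.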
The invention has the following beneficial effects: the table-based visual IVVR editor provided by the invention abandons the complex structure of the traditional canvas; editing can be completed with nothing but a table. Through a self-developed graphical component, the IVVR flow is built with a table/table editor similar to the flow descriptions users commonly write, so the user can read, understand, and edit the IVVR flow conveniently, and the IVVR script can be edited quickly and conveniently in table form. The learning cost is low, the efficiency is high, and the method is user-friendly.
The "module" and "unit" in the present specification refer to software and/or hardware capable of performing a specific function independently or in cooperation with other components, wherein the hardware may be, for example, an FPGA (Field-Programmable Gate Array), an IC (Integrated Circuit), or the like.
The invention also provides a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the table-based visual IVVR editing method when executing the program. In the embodiment of the present invention, the processor is the control center of a computer system and may be the processor of a physical machine or of a virtual machine.
Fig. 4 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 4, a computer-readable storage medium 40 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 41 stored thereon. When executed by a processor, the non-transitory computer-readable instructions 41 perform all or some of the steps of the table-based visual IVVR editing method of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The foregoing description is only a preferred embodiment of the invention and is not intended to limit the invention in any form. Although the invention has been described with reference to the preferred embodiments, those skilled in the art will understand that various changes and modifications may be made without departing from the spirit and scope of the invention, and any simple modification, equivalent replacement, or improvement of the above embodiments made within the technical spirit of the invention falls within its protection scope.

Claims (10)

1. A form-based visual IVVR editor, comprising:
the device comprises an input module, an analysis module, a capability engine module and a storage module;
the input module is connected with the storage module and is used for storing data input by a user into the storage module;
the analysis module is connected with the storage module and is used for reading the data in the storage module and parsing and executing it;
the capability engine module is connected with the analysis module and used for providing corresponding audio and video synthesis capability and intelligent voice recognition capability.
2. The table-based visual IVVR editor of claim 1 wherein:
the input module is a table-based graphical component, the table comprises a plurality of rows, each row of the table comprises a node, each row of the table comprises a plurality of cells, and the cells are used for editing attribute information of the nodes.
3. The table-based visual IVVR editor of claim 2 wherein:
the cells comprise a node name cell, a playing content cell and a condition exit node cell.
4. The table-based visual IVVR editor of claim 3 wherein:
the conditional egress node cell can be decomposed into a number of egress node cells that are used to represent different egress nodes.
5. The table-based visual IVVR editor of claim 1 wherein:
the input module is also used for packaging data input by a user into an IVVR script.
6. The table-based visual IVVR editor of claim 5 wherein:
the analysis module is used for analyzing and executing the IVVR script, and comprises the steps of retrieving a current node, calling the capability engine module to play corresponding audio and video contents according to contents set by the current node script, and responding to a user according to script logic.
7. An editing method using the table-based visual IVVR editor of any one of claims 1-6 comprising the steps of:
s1, establishing a blank service;
s2, selecting a flow of a service for editing;
s3, adding a record row in the flow table;
s4, editing the node name through the node name cell;
s5, editing the node's play content type (audio, video, or audio and video) through the play content cell;
s6, selecting corresponding content;
s7, editing the exit type of the flow node through the conditional exit node cell: a default jump, a jump according to a user key press, or an intelligent jump according to what the user says;
s8, selecting the target node to jump to;
s9, judging whether the process table is edited completely, if not, skipping to the step S3, otherwise, skipping to the step S10;
and S10, finishing editing.
8. The editing method of claim 7, wherein:
in step S7, the jump types include: a default jump, a jump according to a user key press, and an intelligent jump according to what the user says.
9. An electronic device, comprising: a memory configured to store one or more computer programs, and a processor coupled to the memory and configured to execute the one or more computer programs to cause the electronic device to perform the method of any of claims 7-8.
10. A non-transitory computer readable storage medium having stored thereon machine executable instructions which, when executed, cause a machine to perform the steps of the method of any one of claims 7-8.
CN202210945257.9A 2022-08-08 2022-08-08 Visual IVVR editor based on table, editing method, equipment and medium Pending CN115309385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210945257.9A CN115309385A (en) 2022-08-08 2022-08-08 Visual IVVR editor based on table, editing method, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210945257.9A CN115309385A (en) 2022-08-08 2022-08-08 Visual IVVR editor based on table, editing method, equipment and medium

Publications (1)

Publication Number Publication Date
CN115309385A true CN115309385A (en) 2022-11-08

Family

ID=83860557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210945257.9A Pending CN115309385A (en) 2022-08-08 2022-08-08 Visual IVVR editor based on table, editing method, equipment and medium

Country Status (1)

Country Link
CN (1) CN115309385A (en)


Legal Events

Date Code Title Description
PB01 Publication