CN111142833B - Method and system for developing voice interaction product based on contextual model - Google Patents

Method and system for developing voice interaction product based on contextual model

Info

Publication number
CN111142833B
CN111142833B CN201911365062.1A
Authority
CN
China
Prior art keywords
contextual model
product
configuration
voice interaction
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911365062.1A
Other languages
Chinese (zh)
Other versions
CN111142833A (en)
Inventor
Zhou Li (周莉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sipic Technology Co Ltd
Original Assignee
Sipic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sipic Technology Co Ltd filed Critical Sipic Technology Co Ltd
Priority to CN201911365062.1A priority Critical patent/CN111142833B/en
Publication of CN111142833A publication Critical patent/CN111142833A/en
Application granted granted Critical
Publication of CN111142833B publication Critical patent/CN111142833B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention discloses a method and a system for developing a voice interaction product based on a contextual model. The method comprises: in response to an instruction to create a product, acquiring the voice interaction configuration items opened for the product to be created and displaying them on a user interface, the configuration items including a contextual model configuration option; in response to a request instruction, input on the contextual model configuration option of the user interface, to configure a contextual model, acquiring parameter information associated with the option and displaying it on the user interface; and in response to a received instruction to release the created product, acquiring first configuration information associated with the contextual model, and generating and storing a custom contextual model associated with the created product. With the disclosed scheme, custom contextual models can be configured for a voice product as required during its development, so as to better meet user needs; the method is realized through a visual interface and is therefore very friendly to product developers.

Description

Method and system for developing voice interaction product based on contextual model
Technical Field
The invention relates to the technical field of voice interaction, in particular to a method and a system for developing a voice interaction product based on a contextual model.
Background
In recent years, with the development of intelligent voice technology, products based on voice interaction have flourished. However, existing voice interaction products offer only a single mode and cannot meet users' diverse, personalized requirements.
Disclosure of Invention
In order to allow the configuration of a product to be defined according to the different personalized requirements of its usage scenarios, so that an end user can select a product mode matching the scenario for interaction, the inventor conceived of providing product developers with a scheme for contextual-model-based product development at the product development stage.
According to a first aspect of the present invention, there is provided a method for developing a voice interaction product based on a contextual model, comprising:
Responding to a command of creating a product, and acquiring a voice interaction configuration item which is open for the product to be created to be displayed on a user interface, wherein the voice interaction configuration item comprises a contextual model configuration option;
responding to a request instruction for configuring the contextual model input on the contextual model configuration option of the user interface, and acquiring parameter information associated with the contextual model configuration option to display on the user interface;
and in response to a received instruction to release the created product, acquiring first configuration information associated with the contextual model, and generating and storing a custom contextual model associated with the created product.
According to a second aspect of the present invention, there is provided a system for developing a voice interaction product based on a contextual model, comprising:
the product configuration module is used for responding to a command of creating a product, and acquiring a voice interaction configuration item which is open for the product to be created to be displayed on a user interface, wherein the voice interaction configuration item comprises a contextual model configuration option;
the contextual model configuration module is used for responding to a contextual model configuration request instruction input on the contextual model configuration option of the user interface, and acquiring parameter information related to the contextual model configuration option to be displayed on the user interface; and
the issuing module is used for, in response to a received instruction to issue the created product, acquiring first configuration information associated with the contextual model and generating and storing a custom contextual model associated with the created product.
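As an illustrative aid only (not part of the patent), the three modules above can be sketched as a minimal in-memory prototype; all class, method and field names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ContextualModel:
    """A custom contextual model configured by the product developer."""
    profile_id: str
    name: str
    params: dict  # optional parameters and values, e.g. voice skill, speaker

@dataclass
class Product:
    product_id: str
    profiles: list = field(default_factory=list)  # custom contextual models

class DevelopmentPlatform:
    """Toy stand-in for the product configuration, contextual model
    configuration and issuing modules described above."""

    def __init__(self):
        self.store = {}  # product_id -> Product

    def create_product(self, product_id: str) -> dict:
        # Product configuration module: return the voice interaction
        # configuration items opened for the product to be created,
        # including the contextual model configuration option.
        self.store[product_id] = Product(product_id)
        return {"config_items": ["voice_skill", "speaker", "tts_content"],
                "contextual_model_option": True}

    def configure_profile(self, product_id: str, profile: ContextualModel):
        # Contextual model configuration module: record the parameter
        # information entered on the user interface.
        self.store[product_id].profiles.append(profile)

    def publish(self, product_id: str) -> Product:
        # Issuing module: on release, the contextual model configuration
        # is already stored in association with the product via its ID.
        return self.store[product_id]
```

A usage pass would create a product, attach one or more contextual models, then publish; the stored product carries its custom contextual models.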
According to a third aspect of the present invention, there is provided an electronic apparatus comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the above-described method.
According to a fourth aspect of the invention, a storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the scheme of the embodiments of the invention, a product developer can customize contextual models on the voice development platform according to the different usage scenarios of a voice product, and apply different configurations to different contextual models. The requirements of the voice product's end users can thus be met to the greatest extent, making the developed product more competitive in the market and bringing greater profit to the enterprise. Meanwhile, the scheme also provides product developers with a contextual model configuration approach through the user interface; it is flexible and easy to use, reduces developers' learning cost, and helps them bring voice products to market quickly.
Drawings
FIG. 1 is a flowchart of a method for developing a voice interaction product based on contextual models according to an embodiment of the present invention;
FIG. 2 is a block diagram of a system for developing a voice interaction product based on contextual models according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
As used in this disclosure, "module," "device," "system," and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. In particular, for example, an element may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. Also, an application or script running on a server, or the server itself, may be an element. One or more elements may reside within a process and/or thread of execution, and an element may be localized on one computer and/or distributed between two or more computers, and may be operated by various computer-readable media. The elements may also communicate by way of local and/or remote processes based on a signal having one or more data packets (e.g., data from one element interacting with another element in a local system or distributed system, and/or across a network such as the internet with other systems by way of the signal).
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The scheme for developing a voice product according to embodiments of the invention can be applied to any intelligent device with an intelligent voice interaction function, such as a mobile phone, a watch, earphones or a personal computer (PC), to realize personalized configuration of the voice product carried on the device; the scope of the invention is not limited thereto, however. According to the scheme, multiple contextual models can be configured for a voice product as required; the implementation is simple and efficient and adapts well to different voice interaction scenarios.
The present invention will be described in further detail with reference to the accompanying drawings.
Fig. 1 schematically shows a method flow of a method for developing a voice interaction product based on a contextual model according to an embodiment of the present invention, and as shown in fig. 1, the method of the present embodiment includes the following steps:
step S101: and responding to an instruction of creating a product, and acquiring a voice interaction configuration item which is open to the product to be created to be displayed on a user interface, wherein the voice interaction configuration item comprises a contextual model configuration option.
During development of a voice product, after the background developers have implemented its functions, a parameter configuration path and interface are provided to product developers through a voice development platform, such as the Sipic DUI voice development platform, so that product developers can create and configure voice products according to their own requirements. A voice product developer first logs in to the voice development platform to create the voice product to be developed. The development options provided by a traditional voice development platform are generally basic voice capabilities and general configuration parameters. Basic voice capabilities refer to the development and configuration of voice processing modules such as speech recognition, semantic analysis and dialogue management; general configuration parameters are product configuration parameters opened to product developers, such as voice skills, TTS reply style and reply content. In the traditional approach this configuration is generic and does not take into account the product's application scenarios or users' requirements in different scenarios. In the scheme of this embodiment, in addition to the traditional voice interaction configuration items for creating a product, a contextual model configuration option is provided, so that a product developer can design and configure one or more custom contextual models as required. The contextual model configuration option may be presented on the interface as a checkbox or button for choosing whether to configure a contextual model.
Step S102: in response to a request instruction, input on the contextual model configuration option of the user interface, to configure a contextual model, acquiring parameter information associated with the contextual model configuration option and displaying it on the user interface. When the product developer chooses to configure a contextual model, for example by clicking the checkbox or button, a user interface for adding a new contextual model is output, and the parameter information associated with the contextual model configuration option is displayed on it for the product developer to select and configure. Illustratively, this parameter information includes contextual model identification information, together with the optional parameters and parameter values associated with the custom contextual model; the optional parameters may be any combination of one or more of voice skills, speaker and TTS broadcast content. A product developer can thus enter the information of a contextual model to be newly configured through the user interface as required: for example, identification information giving the contextual model the name "elderly mode" and the ID 001, the opera column as the selected voice skill, Guo Degang as the selected speaker, and Guo Degang-style broadcast content.
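The elderly-mode example above could be represented as a plain configuration record, with a check that every configured parameter is one the background developers have opened (a hypothetical sketch; all keys and values are illustrative, not taken from the patent):

```python
# Hypothetical parameter record for the example in the text: contextual
# model named "elderly mode" with ID 001, the opera column as the voice
# skill, and a Guo Degang-style speaker and broadcast content.
elderly_profile = {
    "profile_id": "001",
    "name": "elderly mode",
    "params": {
        "voice_skill": "opera_column",
        "speaker": "guodegang",
        "tts_content_style": "guodegang",
    },
}

def validate_profile(profile: dict, open_params: set) -> bool:
    """Check that every configured parameter is among the optional
    parameters displayed on the user interface (see step S102)."""
    return set(profile["params"]) <= open_params
```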
Step S103: in response to a received instruction to release the created product, acquiring first configuration information associated with the contextual model, and generating and storing a custom contextual model associated with the created product. When the product developer completes configuration and clicks release, the configuration information on the user interface is acquired and stored in association with the created product (for example, bound through the product ID). Unlike conventional development, in this embodiment the first configuration information associated with each contextual model newly added in step S102 is also acquired and stored in association with the created product, so that each product has its own custom contextual models. Illustratively, the acquired first configuration information comprises the contextual model identification information and the configuration of the optional parameters and parameter values under that identification information. In other embodiments, the development requirements of a voice product need not be limited to voice skills, speaker and TTS broadcast content, and may include others; the embodiments of the invention place no limitation here.
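The release step above amounts to binding the first configuration information of each newly added contextual model to the product by its ID. A minimal sketch (function and key names are assumptions, not from the patent):

```python
def publish_product(product_id: str, profiles: list, db: dict) -> None:
    """On release, gather the first configuration information of every
    newly added contextual model and store it keyed by the product ID
    (the "association binding through product ID" described above)."""
    db[product_id] = {
        "profiles": {p["profile_id"]: p for p in profiles},
    }

# Example: publish a product with one custom contextual model.
db = {}
publish_product("prod-42", [{"profile_id": "001", "name": "elderly mode"}], db)
```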
Illustratively, smart speakers have become ubiquitous small household appliances that permeate people's daily life, and more and more families use them. Family members of different ages have different requirements on a smart speaker's resources and functions: children tend to listen to stories and nursery rhymes, adults place higher demands on smart-home control, and the elderly prefer rich resources such as operas and square-dance music. Since many resources require payment, a VIP mode can also be added, whose purchase unlocks better and richer resources. A developer of a smart speaker product can therefore configure one or more custom contextual models associated with the created product as required during development of the voice product. During use, the end user only needs to set the contextual model through the terminal application, and the configuration information associated with the corresponding contextual model can be retrieved through the cloud server to produce the voice response.
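At runtime the end user only selects a contextual model in the terminal application; the cloud side then looks up the matching configuration to drive the voice response. Roughly (a hypothetical sketch; the patent does not specify this lookup's implementation):

```python
def resolve_profile(db: dict, product_id: str, profile_id: str) -> dict:
    """Cloud-side lookup: given the product and the contextual model the
    end user selected in the terminal app (or located via the user's
    voiceprint), return the configuration used for the voice response."""
    return db.get(product_id, {}).get("profiles", {}).get(profile_id, {})

# Sample cloud store keyed by product ID, as described in the text.
cloud_db = {
    "speaker-1": {
        "profiles": {
            "001": {"name": "elderly mode", "voice_skill": "opera_column"},
        },
    },
}
```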
In practice, a voice product may be used in different scenarios or by people of different age groups. With this scheme, a product developer can configure multiple contextual models as required, and the end user can switch freely through the companion APP, or the corresponding contextual model can be located from the user's voiceprint. The requirements of the voice product's end users can thus be met to the greatest extent, making the developed product more competitive in the market and bringing greater profit to the enterprise. Meanwhile, the scheme provides a way to configure a voice product's contextual models through a voice development platform with a user interface; it is flexible and easy to use, reduces developers' learning cost, and helps them bring voice products to market quickly.
Fig. 2 schematically shows a framework structure of a system for developing a voice interaction product based on a contextual model according to an embodiment of the present invention, as shown in fig. 2, the system includes a product configuration module 20, configured to, in response to an instruction to create a product, obtain a voice interaction configuration item open to the product to be created, and display the voice interaction configuration item on a user interface, where the voice interaction configuration item includes a contextual model configuration option;
the contextual model configuration module 21 is configured to, in response to a contextual model configuration request instruction input on a contextual model configuration option of the user interface, obtain parameter information associated with the contextual model configuration option and display the parameter information on the user interface; and
the publishing module 22 is configured to, in response to a received instruction for publishing a created product, obtain first configuration information associated with a scenario mode, and generate a custom scenario mode storage associated with the created product, where the first configuration information associated with the scenario mode includes scenario mode identification information and configuration information for optional parameters and parameter values under the scenario mode identification information.
The parameter information associated with the contextual model configuration options comprises contextual model identification information, together with the optional parameters and parameter values associated with the custom contextual model; the optional parameters associated with the custom contextual model comprise one of, or any combination of two or more of, voice skills, speaker and TTS broadcast content. For the specific implementation of each module of the system of this embodiment, reference may be made to the description of the method, which is not repeated here.
Through the system provided by the embodiment of the invention, a product developer can customize one or more contextual models on a voice development platform, such as the Sipic DUI platform, according to the different usage scenarios of a voice product, and configure different voice skills, speakers and so on for different contextual models. Moreover, providing contextual model configuration through a visual interface makes operation easier for product developers and reduces the learning burden. It should be noted that the optional parameters and parameter values displayed on the user interface are the parameters of the cloud service, developed by the background developers and opened to product developers, together with all the selectable values configured for each parameter; the background developers open these through the voice development platform so that product developers can configure and select them as required.
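The opened parameters and their selectable values described above can be pictured as a simple catalogue published by the background developers (hypothetical names and values; the patent does not give a concrete schema):

```python
# Hypothetical catalogue of parameters the background developers open to
# product developers, with all selectable values for each parameter.
OPEN_PARAMETERS = {
    "voice_skill": ["opera_column", "children_story", "smart_home"],
    "speaker": ["guodegang", "default_female"],
    "tts_content_style": ["guodegang", "neutral"],
}

def options_for(param: str) -> list:
    """Return the values a product developer may choose for a parameter;
    an unopened parameter offers no values."""
    return OPEN_PARAMETERS.get(param, [])
```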
In some embodiments, the present invention provides a non-transitory computer-readable storage medium, in which one or more programs including executable instructions are stored, where the executable instructions can be read and executed by an electronic device (including but not limited to a computer, a server, or a network device, etc.) to perform the method for developing a voice interaction product based on a contextual model of the present invention.
In some embodiments, the present invention further provides a computer program product, the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, the computer program includes program instructions, when the program instructions are executed by a computer, the computer executes the method for developing the voice interaction product based on the contextual model.
In some embodiments, an embodiment of the present invention further provides an electronic device, which includes: at least one processor, and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the method for developing the voice interaction product based on the contextual model.
In some embodiments, the present invention further provides a storage medium, on which a computer program is stored, where the program is capable of executing the method for developing a voice interaction product based on contextual models when executed by a processor.
The system for developing voice interaction products based on contextual models according to the embodiments of the present invention can be used to execute the method for developing voice interaction products based on contextual models according to the embodiments of the present invention, and accordingly achieve the technical effects achieved by the method for developing voice interaction products based on contextual models according to the embodiments of the present invention, and are not described herein again. In the embodiment of the present invention, the relevant functional module may be implemented by a hardware processor (hardware processor).
Fig. 3 is a schematic hardware structure diagram of an electronic device for executing a method for developing a voice interaction product based on a contextual model according to another embodiment of the present application, and as shown in fig. 3, the device includes:
one or more processors 510 and memory 520, with one processor 510 being an example in fig. 3.
The apparatus for performing the method of developing a voice interaction product based on a contextual model may further include: an input device 530 and an output device 540.
The processor 510, the memory 520, the input device 530, and the output device 540 may be connected by a bus or other means, such as the bus connection in fig. 3.
The memory 520, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the method for developing a voice interaction product based on a contextual model in the embodiments of the present application. The processor 510 executes various functional applications of the server and data processing by running the non-volatile software programs, instructions and modules stored in the memory 520, so as to implement the method for developing a voice interaction product based on contextual models in the above method embodiments.
The memory 520 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a system for developing a voice interaction product based on a contextual model, and the like. Further, the memory 520 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 520 optionally includes memory located remotely from processor 510, and these remote memories may be connected over a network to a system for developing voice interaction products based on contextual models. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 530 may receive input numeric or character information and generate signals related to user settings and function control of a system for developing a voice interaction product based on a contextual model. The output device 540 may include a display device such as a display screen.
The one or more modules are stored in the memory 520 and when executed by the one or more processors 510, perform a method for developing a voice interaction product based on a contextual model in any of the method embodiments described above.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) Mobile communication devices, which are characterized by mobile communication capabilities and are primarily aimed at providing voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones and low-end phones.
(2) Ultra-mobile personal computer devices, which belong to the category of personal computers, have computing and processing functions, and generally also feature mobile internet access. Such terminals include PDA, MID and UMPC devices, such as iPads.
(3) Portable entertainment devices, which can display and play multimedia content. Such devices include audio and video players (e.g., iPods), handheld game consoles, e-book readers, smart toys and portable car navigation devices.
(4) Servers, which have an architecture similar to that of a general-purpose computer but, because they must provide highly reliable services, have higher requirements on processing capability, stability, reliability, security, scalability and manageability.
(5) Other electronic devices with data interaction functions.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the above technical solutions substantially or contributing to the related art may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
What has been described above are merely some of the embodiments of the present invention. It will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A method for developing a voice interaction product based on a contextual model, characterized by comprising:
responding to an instruction to create a product by acquiring the voice interaction configuration items opened for the product to be created and displaying them on a user interface, wherein the voice interaction configuration items comprise a contextual model configuration option;
responding to a contextual model configuration request instruction input on the contextual model configuration option of the user interface by acquiring the parameter information associated with the contextual model configuration option and displaying it on the user interface; and
responding to a received instruction to release the created product by acquiring the configuration information associated with the contextual model and generating and storing a custom contextual model associated with the created product.
2. The method of claim 1, wherein the parameter information associated with the contextual model configuration option comprises contextual model identification information, and optional parameters and parameter values associated with a custom contextual model.
3. The method of claim 2, wherein the optional parameters associated with the custom contextual model comprise any combination of one or more of voice skills, speaker, and TTS presentation content; and
the acquiring of the configuration information associated with the contextual model comprises acquiring the contextual model identification information, and the configuration information of the optional parameters and parameter values under that identification information.
4. The method of any of claims 1 to 3, wherein the number of custom contextual models associated with the created product is at least one.
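The configuration flow recited in claims 1 to 4 can be sketched as follows. This is a minimal illustration under assumptions: every name here (ContextualModel, create_product, publish_product, the dict layout) is hypothetical and not taken from the patent; the optional parameters follow claim 3 (voice skills, speaker, TTS presentation content).

```python
# Hypothetical sketch of the claimed method; names are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContextualModel:
    profile_id: str                                  # contextual model identification information
    skills: list[str] = field(default_factory=list)  # voice skills (optional parameter)
    speaker: Optional[str] = None                    # speaker (optional parameter)
    tts_content: Optional[str] = None                # TTS presentation content (optional parameter)

def create_product(name: str) -> dict:
    """Step 1: open the voice interaction configuration items for the
    new product, including the contextual model configuration option."""
    return {"name": name,
            "config_items": ["wake_word", "contextual_model"],
            "profiles": []}

def configure_profile(product: dict, profile: ContextualModel) -> None:
    """Step 2: attach a user-defined contextual model; a product may
    carry one or more custom contextual models (claim 4)."""
    product["profiles"].append(profile)

def publish_product(product: dict) -> dict:
    """Step 3: on release, collect the configuration associated with
    each contextual model and generate the stored custom profiles."""
    return {p.profile_id: {"skills": p.skills,
                           "speaker": p.speaker,
                           "tts": p.tts_content}
            for p in product["profiles"]}
```

In use, a developer would create the product, configure e.g. a "night" profile with a soft speaker and a bedtime skill, and publish, which persists the profile keyed by its identification information.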
5. A system for developing a voice interaction product based on a contextual model, characterized by comprising:
a product configuration module, configured to respond to an instruction to create a product by acquiring the voice interaction configuration items opened for the product to be created and displaying them on a user interface, wherein the voice interaction configuration items comprise a contextual model configuration option;
a contextual model configuration module, configured to respond to a contextual model configuration request instruction input on the contextual model configuration option of the user interface by acquiring the parameter information associated with the contextual model configuration option and displaying it on the user interface; and
a publishing module, configured to respond to a received instruction to release the created product by acquiring the first configuration information associated with the contextual model and generating and storing a custom contextual model associated with the created product.
6. The system of claim 5, wherein the parameter information associated with the contextual model configuration option comprises contextual model identification information, and optional parameters and parameter values associated with a custom contextual model.
7. The system of claim 6, wherein the optional parameters associated with the custom contextual model comprise any combination of one or more of voice skills, speaker, and TTS presentation content; and
the first configuration information associated with the contextual model comprises the contextual model identification information and the configuration information of the optional parameters and parameter values under that identification information.
8. The system of claim 7, wherein the number of custom contextual models associated with the created product is at least one.
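The three-module decomposition of claims 5 to 8 can be pictured as independent handlers, each responding to one kind of instruction. Again a hedged sketch: the class and method names, and the shape of the data passed between modules, are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical wiring of the claimed system's three modules.
class ProductConfigModule:
    """Responds to a product-creation instruction by exposing the open
    voice interaction configuration items for display on the UI."""
    def on_create_product(self) -> dict:
        return {"config_items": ["contextual_model"], "profiles": {}}

class ContextualModelConfigModule:
    """Responds to a contextual model configuration request by returning
    the parameter information associated with the configuration option."""
    def on_configure_request(self) -> dict:
        return {"profile_id": None,
                "optional_params": ["voice_skills", "speaker", "tts_content"]}

class PublishModule:
    """Responds to a release instruction: gathers the first configuration
    information of each contextual model and stores the custom profiles
    in association with the created product."""
    def on_publish(self, product: dict, configured: dict) -> dict:
        for profile_id, params in configured.items():
            product["profiles"][profile_id] = params
        return product["profiles"]
```

The point of the split is that UI display (first two modules) stays decoupled from persistence (the publishing module), so a profile can be edited repeatedly before any storage is generated.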
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the method of any one of claims 1-4.
10. A storage medium on which a computer program is stored, which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
CN201911365062.1A 2019-12-26 2019-12-26 Method and system for developing voice interaction product based on contextual model Active CN111142833B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911365062.1A CN111142833B (en) 2019-12-26 2019-12-26 Method and system for developing voice interaction product based on contextual model


Publications (2)

Publication Number Publication Date
CN111142833A CN111142833A (en) 2020-05-12
CN111142833B true CN111142833B (en) 2022-07-08

Family

ID=70520372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911365062.1A Active CN111142833B (en) 2019-12-26 2019-12-26 Method and system for developing voice interaction product based on contextual model

Country Status (1)

Country Link
CN (1) CN111142833B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112612396A (en) * 2020-12-28 2021-04-06 苏州思必驰信息科技有限公司 Intelligent voice private deployment method and system
CN113160832A (en) * 2021-04-30 2021-07-23 合肥美菱物联科技有限公司 Voice washing machine intelligent control system and method supporting voiceprint recognition
CN113849103A (en) * 2021-10-13 2021-12-28 京东科技信息技术有限公司 Object model mapping method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018102980A1 (en) * 2016-12-06 2018-06-14 吉蒂机器人私人有限公司 Speech interaction method, device and system
CN110234032A (en) * 2019-05-07 2019-09-13 百度在线网络技术(北京)有限公司 Voice skill creation method and system
CN110570866A (en) * 2019-09-11 2019-12-13 百度在线网络技术(北京)有限公司 Voice skill creating method, device, electronic equipment and medium


Also Published As

Publication number Publication date
CN111142833A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN111049996B (en) Multi-scene voice recognition method and device and intelligent customer service system applying same
CN111142833B (en) Method and system for developing voice interaction product based on contextual model
CN108984157B (en) Skill configuration and calling method and system for voice conversation platform
CN110442701B (en) Voice conversation processing method and device
CN109196464A (en) User agent based on context
KR20200012933A (en) Shortened voice user interface for assistant applications
CN109637548A (en) Voice interactive method and device based on Application on Voiceprint Recognition
CN109947388B (en) Page playing and reading control method and device, electronic equipment and storage medium
CN111145745B (en) Conversation process customizing method and device
CN111063353B (en) Client processing method allowing user-defined voice interactive content and user terminal
CN110246499B (en) Voice control method and device for household equipment
CN113596508B (en) Virtual gift giving method, device and medium for live broadcasting room and computer equipment
US10147426B1 (en) Method and device to select an audio output circuit based on priority attributes
CN110619878B (en) Voice interaction method and device for office system
CN109948151A (en) The method for constructing voice assistant
CN110660391A (en) Method and system for customizing voice control of large-screen terminal based on RPA (resilient packet Access) interface
Epelde et al. Providing universally accessible interactive services through TV sets: implementation and validation with elderly users
CN103596051A (en) A television apparatus and a virtual emcee display method thereof
US9747070B1 (en) Configurable state machine actions
CN111161734A (en) Voice interaction method and device based on designated scene
US20210098012A1 (en) Voice Skill Recommendation Method, Apparatus, Device and Storage Medium
CN110600021A (en) Outdoor intelligent voice interaction method, device and system
CN110442698B (en) Dialog content generation method and system
CN110473524B (en) Method and device for constructing voice recognition system
CN108924648B (en) Method, apparatus, device and medium for playing video data to a user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant after: Sipic Technology Co.,Ltd.

Address before: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant before: AI SPEECH Co.,Ltd.

GR01 Patent grant