CN111916080A - Voice recognition resource selection method and device, computer equipment and storage medium - Google Patents

Info

Publication number
CN111916080A
CN111916080A (application CN202010773211.4A)
Authority
CN
China
Prior art keywords
voice recognition
scene
format
recognition resource
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010773211.4A
Other languages
Chinese (zh)
Inventor
郭晓花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2020-08-04
Filing date: 2020-08-04
Publication date: 2020-11-10
Application filed by China United Network Communications Group Co Ltd
Priority to CN202010773211.4A
Publication of CN111916080A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26: Speech to text systems

Abstract

Embodiments of the present disclosure provide a voice recognition resource selection method and apparatus, a computer device, and a storage medium. The method comprises: acquiring voice recognition resources matched with the voice input by a user; identifying a use scene according to information related to the voice input; querying, in a preset mapping relation table, the voice recognition resource output format corresponding to the use scene, the mapping relation table comprising various use scenes and the voice recognition resource output formats corresponding to them; selecting, from the matched voice recognition resources, resources in the corresponding output format as a selection result; and outputting the selection result. Because the voice recognition resource output format required by the user is determined according to the user's usage scene and resources in that format are then output, the accuracy of selecting voice recognition resources and the accuracy of the voice recognition output result are improved.

Description

Voice recognition resource selection method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a method for selecting speech recognition resources, a device for selecting speech recognition resources, a computer device, and a computer-readable storage medium.
Background
In recent years, with the development of speech recognition technology, more and more electronic devices have been equipped with speech recognition functions and become speech recognition devices, and the accuracy of speech recognition has long been a problem that those skilled in the art strive to improve. At present, some technologies improve the accuracy with which voice input instructions are recognized by various methods, and some perform speech recognition using recognition resources corresponding to the voice input scene to improve recognition accuracy.
However, the prior art improves accuracy either through more accurate recognition at the input end or through scene matching of voice recognition resources; a scheme for selecting among voice recognition resources is lacking, so the accuracy of the voice recognition output result remains low.
Providing a scheme for selecting voice recognition resources is therefore an urgent problem to be solved.
Disclosure of Invention
The present disclosure has been made to at least partially solve the technical problems occurring in the prior art.
According to an aspect of the embodiments of the present disclosure, a method for selecting a speech recognition resource is provided, where the method includes:
acquiring a voice recognition resource matched with voice input by a user;
identifying a use scene according to the relevant information of the voice input;
inquiring a voice recognition resource output format corresponding to the use scene in a preset mapping relation table, wherein the mapping relation table comprises various use scenes and the voice recognition resource output formats corresponding to the use scenes;
selecting, from the matched voice recognition resources, resources in the output format corresponding to the use scene as a selection result; and
outputting the selection result.
According to another aspect of the embodiments of the present disclosure, there is provided a speech recognition resource selecting apparatus, including:
an acquisition module configured to acquire a speech recognition resource matching a speech input by a user;
a scene recognition module configured to recognize a usage scene according to the related information of the voice input;
a query module configured to query, in a preset mapping relation table, the voice recognition resource output format corresponding to the use scene recognized by the scene recognition module, wherein the mapping relation table comprises various use scenes and the voice recognition resource output formats corresponding to the use scenes;
a selection module configured to select, from the matched voice recognition resources acquired by the acquisition module, resources in the output format corresponding to the use scene recognized by the scene recognition module as a selection result; and
an output module configured to output the selection result obtained by the selection module.
According to another aspect of the embodiments of the present disclosure, there is provided a computer device, including a memory and a processor, where the memory stores a computer program, and when the processor runs the computer program stored in the memory, the processor executes the foregoing speech recognition resource selection method.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having a computer program stored thereon, where when the computer program is executed by a processor, the processor executes the foregoing speech recognition resource selecting method.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the voice recognition resource selection method provided by the embodiment of the disclosure, instead of matching the use scenes of the voice recognition resources, after the voice recognition resources matched with the voice input by the user are obtained, the output format of the voice recognition resources required by the user is determined according to the use scenes of the user, and then the resources of the corresponding output format are selected from the matched voice recognition resources according to the output format of the voice recognition resources required by the user and output, so that the accuracy of selecting the voice recognition resources is improved, and the accuracy of the voice recognition output result can be improved.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the disclosure. The objectives and other advantages of the disclosure may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification. They illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure; they do not limit the disclosure.
Fig. 1 is a schematic flow chart of a speech recognition resource selection method according to an embodiment of the present disclosure;
Fig. 2 is a schematic structural diagram of a speech recognition resource selecting apparatus according to an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, specific embodiments of the present disclosure are described below in detail with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic flow chart of a speech recognition resource selection method according to an embodiment of the present disclosure. The method can be applied to a portable intelligent device that has an Internet access function as well as a voice input/output module and a display module. As shown in Fig. 1, the method includes the following steps S101 to S105.
S101, acquiring a voice recognition resource matched with the voice input by a user;
S102, identifying a use scene according to the relevant information of the voice input;
S103, querying a voice recognition resource output format corresponding to the use scene in a preset mapping relation table;
the mapping relation table comprises various use scenes and the voice recognition resource output formats corresponding to the use scenes;
S104, selecting, from the matched voice recognition resources, resources in the output format corresponding to the use scene as a selection result;
S105, outputting the selection result.
In the embodiment of the present disclosure, the usage scenes are not matched against the voice recognition resources themselves. Instead, after the voice recognition resources matched with the voice input by the user are acquired, the output format required by the user is determined according to the user's usage scene, and resources in that output format are then selected from the matched voice recognition resources and output. This improves the accuracy of selecting voice recognition resources and, in turn, the accuracy of the voice recognition output result.
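To make the flow of steps S101 to S105 concrete, the following Python listing is a minimal sketch. It is only an illustration: the dictionary-based resource representation, the SCENE_FORMAT_MAP table and the function name are assumptions made for the example and are not part of the disclosed implementation; the acquisition step (S101) and the scene recognition step (S102) are represented by the function's inputs.

# Minimal sketch of steps S103 to S105, assuming each matched resource is a dict
# with a "format" key; the matched resources (S101) and the recognized scene
# (S102) are passed in as inputs for brevity.

SCENE_FORMAT_MAP = {            # hypothetical mapping relation table
    "work": "text",
    "meeting": "audio",
    "leisure": "video",
}

def select_recognition_resource(matched_resources, scene):
    # S103: query the output format configured for the recognized use scene
    output_format = SCENE_FORMAT_MAP[scene]
    # S104: keep only the matched resources in the corresponding output format
    selection = [r for r in matched_resources if r["format"] == output_format]
    # S105: the selection result is then output
    return selection

# Example: resources matched to the user's speech (S101) in several formats
resources = [{"format": "text", "data": "..."}, {"format": "audio", "data": "..."}]
print(select_recognition_resource(resources, "work"))   # keeps only the text resource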
In one embodiment, step S101 specifically comprises: calling the matched voice recognition resource from a server according to the voice input by the user.
It should be noted that calling a voice recognition resource matched with the voice input by a user from a server is prior art, and it is not described again in the embodiments of the present disclosure.
In one embodiment, the information related to the speech input includes: at least one of the time and the place of voice input, the tone of voice input, the dynamic and static states of the voice input equipment and the noise of the surrounding environment.
In the embodiments of the present disclosure, the current scene of the user (i.e., the usage scene) can be identified by analyzing the information related to the voice input. For example, if the voice input occurs in the evening in a park and the voice input device is moving slowly, it can be determined that the user is currently in a leisure scene; if the voice input occurs during working hours in a conference room, the voice input device is stationary, and the user's surroundings are quiet, it can be determined that the user is currently in a meeting scene.
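As an illustration of this kind of rule-based judgment, the sketch below classifies a scene from such context fields. The field names and the specific rules are assumptions made only for the example; the disclosure does not prescribe a particular recognition algorithm.

# Rule-based sketch of usage scene recognition from voice-input context.
# Field names (time, place, device_motion, ambient_noise) are illustrative.

def recognize_scene(info):
    if (info.get("place") == "conference room" and info.get("device_motion") == "static"
            and info.get("ambient_noise") == "low"):
        return "meeting"
    if (info.get("time") == "evening" and info.get("place") == "park"
            and info.get("device_motion") == "slow"):
        return "leisure"
    return "work"   # fallback scene for this sketch

print(recognize_scene({"place": "conference room", "device_motion": "static",
                       "ambient_noise": "low"}))        # prints "meeting"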
In one embodiment, the usage scenario includes: at least one of a work scene, a meeting scene, a leisure scene, an entertainment scene, and a party scene.
In one embodiment, the speech recognition resource output format includes: at least one of a text format, an audio format, a picture format, and a video format.
In the embodiments of the present disclosure, a mapping relation table between usage scenes and the corresponding resource output formats can be obtained by configuring the voice recognition resource output formats corresponding to the various usage scenes. The mapping relation table can be stored in a database for querying.
For example, the voice recognition resource output formats corresponding to the working scene may be a text format and an audio format; the output formats corresponding to the meeting scene may be an audio format and a text format; the output formats corresponding to the leisure scene may be an audio format and a video format; the output formats corresponding to the entertainment scene may be a video format and an audio format; and the output formats corresponding to the party scene may be an audio format and a video format.
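Under the assumption of the examples above, such a mapping relation table could be encoded as a simple dictionary in which each usage scene maps to an ordered list of output formats (the ordering anticipates the priorities discussed below). The concrete pairings are only the examples from this paragraph, not mandated values.

# Hypothetical encoding of the mapping relation table; in practice it could be
# persisted as a database table with columns (scene, format, priority).

SCENE_FORMAT_PRIORITIES = {
    "work":          ["text", "audio"],
    "meeting":       ["audio", "text"],
    "leisure":       ["audio", "video"],
    "entertainment": ["video", "audio"],
    "party":         ["audio", "video"],
}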
In one embodiment, if there is more than one voice recognition resource output format corresponding to the usage scenario in the mapping table, the mapping table further includes priorities of the various voice recognition resource output formats corresponding to the usage scenario.
Correspondingly, step S103 specifically comprises: querying, in the preset mapping relation table, the priorities of the various voice recognition resource output formats corresponding to the use scene.
Step S104 specifically comprises: selecting, from the matched voice recognition resources, the resource in the output format with the highest priority corresponding to the use scene as the selection result.
For example, if the current scene of the user is a working scene, the corresponding voice recognition resource output formats are a text format and an audio format, with the text format as the first priority and the audio format as the second priority. According to these priorities, the resources in the text format are selected from the matched voice recognition resources as the selection result.
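A sketch of this priority-driven selection is given below, using a format-list-per-scene table like the SCENE_FORMAT_PRIORITIES dictionary sketched earlier: the scene's format list is walked in priority order and the first format for which matched resources exist wins. This is an assumed implementation, not the only one consistent with the description.

# Sketch of steps S103/S104 with priorities: try the scene's output formats in
# priority order and return the matched resources of the first format that hits.

def select_by_priority(matched_resources, scene, scene_format_map):
    for fmt in scene_format_map[scene]:                    # highest priority first
        chosen = [r for r in matched_resources if r["format"] == fmt]
        if chosen:
            return chosen                                  # selection result
    return []                                              # no resource in any allowed format

resources = [{"format": "audio"}, {"format": "text"}]
print(select_by_priority(resources, "work", {"work": ["text", "audio"]}))
# prints the text-format resource, since text is the first priority for "work"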
In one embodiment, the following steps S106 and S107 are further included between steps S104 and S105.
S106, judging whether the user corrects the selection result, and if the user corrects the selection result, executing the step S107; if the user does not correct the selection result, executing step S105 to directly output the selection result;
S107, updating the mapping relation table according to the user's correction.
Specifically, if the user corrects the selection result, i.e. the resource in the first-priority output format, the mapping relation table is updated according to the correction. For example, the first-priority output format is moved back one position for the corresponding usage scene, that is, the original first-priority and second-priority output formats swap positions in the table. It is then judged whether the user corrects the result again; if so, the mapping relation table is updated again according to the correction, and so on, until the user no longer corrects the selection result.
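The following sketch shows one way such an update could look, assuming the format-list-per-scene table used above; the demote-by-one-position rule is taken from the example, while the function name is an illustration.

# Sketch of step S107: demote the first-priority output format when the user
# corrects a selection result delivered in that format.

def demote_first_priority(scene_format_map, scene):
    formats = scene_format_map[scene]
    if len(formats) > 1:
        formats[0], formats[1] = formats[1], formats[0]    # swap priorities 1 and 2
    return scene_format_map

table = {"work": ["text", "audio"]}
print(demote_first_priority(table, "work"))                # {'work': ['audio', 'text']}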
In the embodiment of the present disclosure, the mapping table is updated according to the user correction condition when the user has corrected the selection result, so that the priority setting of various speech recognition resource output formats corresponding to the usage scenario in the mapping table is more accurate.
According to the voice recognition resource selection method provided by the embodiments of the present disclosure, the voice recognition resource output format required by the user is determined according to the user's usage scene, and resources in the corresponding format are then output, thereby improving the accuracy of voice recognition resource selection and, in turn, the accuracy of the voice recognition output result.
Fig. 2 is a schematic structural diagram of a speech recognition resource selecting apparatus according to an embodiment of the present disclosure. As shown in fig. 2, the apparatus 2 includes: an acquisition module 21, a scene recognition module 22, a query module 23, a selection module 24 and an output module 25.
The acquisition module 21 is configured to acquire a voice recognition resource matched with the voice input by a user; the scene recognition module 22 is configured to recognize a use scene according to the information related to the voice input; the query module 23 is configured to query, in a preset mapping relation table, the voice recognition resource output format corresponding to the use scene recognized by the scene recognition module 22, where the mapping relation table comprises various use scenes and the voice recognition resource output formats corresponding to the use scenes; the selection module 24 is configured to select, from the matched voice recognition resources acquired by the acquisition module 21, resources in the output format corresponding to the use scene recognized by the scene recognition module 22 as a selection result; and the output module 25 is configured to output the selection result obtained by the selection module 24.
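For illustration only, the module structure of Fig. 2 could be mirrored as small cooperating classes, as in the sketch below; the class and method names are assumptions, and the scene recognition and selection logic reuse the simplified rules of the earlier sketches.

# Object-oriented sketch of the apparatus of Fig. 2 (all names are illustrative).

class SceneRecognitionModule:
    def recognize(self, info):
        # simplified stand-in for the rule-based scene recognition sketched above
        return "meeting" if info.get("place") == "conference room" else "work"

class QueryModule:
    def __init__(self, mapping_table):
        self.mapping_table = mapping_table
    def query(self, scene):
        return self.mapping_table[scene]      # output formats for the scene, by priority

class SelectionModule:
    def select(self, resources, formats):
        for fmt in formats:
            chosen = [r for r in resources if r["format"] == fmt]
            if chosen:
                return chosen
        return []

class SpeechResourceSelector:
    def __init__(self, mapping_table):
        self.scene_module = SceneRecognitionModule()
        self.query_module = QueryModule(mapping_table)
        self.selection_module = SelectionModule()
    def run(self, matched_resources, speech_info):
        scene = self.scene_module.recognize(speech_info)
        formats = self.query_module.query(scene)
        return self.selection_module.select(matched_resources, formats)  # result handed to the output module

selector = SpeechResourceSelector({"work": ["text", "audio"], "meeting": ["audio", "text"]})
print(selector.run([{"format": "text"}, {"format": "audio"}],
                   {"place": "conference room"}))          # selects the audio resource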
In the embodiment of the present disclosure, the usage scenes are not matched against the voice recognition resources themselves. Instead, after the voice recognition resources matched with the voice input by the user are acquired, the output format required by the user is determined according to the user's usage scene, and resources in that output format are then selected from the matched voice recognition resources and output. This improves the accuracy of selecting voice recognition resources and, in turn, the accuracy of the voice recognition output result.
In one embodiment, the acquisition module 21 is specifically configured to call the matched voice recognition resource from a server according to the voice input by the user.
It should be noted that calling a voice recognition resource matched with the voice input by a user from a server is prior art, and it is not described again in the embodiments of the present disclosure.
In one embodiment, the information related to the speech input includes: at least one of the time and the place of voice input, the tone of voice input, the dynamic and static states of the voice input equipment and the noise of the surrounding environment.
In one embodiment, the usage scenario includes: at least one of a work scene, a meeting scene, a leisure scene, an entertainment scene, and a party scene.
In one embodiment, the speech recognition resource output format includes: at least one of a text format, an audio format, a picture format, and a video format.
In one embodiment, if there is more than one voice recognition resource output format corresponding to the usage scenario in the mapping table, the mapping table further includes priorities of the various voice recognition resource output formats corresponding to the usage scenario.
Correspondingly, the query module 23 is specifically configured to query, in the preset mapping relation table, the priorities of the various voice recognition resource output formats corresponding to the use scene recognized by the scene recognition module 22.
The selection module 24 is specifically configured to select, from the matched voice recognition resources acquired by the acquisition module 21, the resource in the output format with the highest priority corresponding to the use scene recognized by the scene recognition module 22 as the selection result.
In one embodiment, the apparatus 2 further comprises: the device comprises a judging module and an updating module.
The judging module is configured to judge whether the user corrects the selection result; the updating module is configured to update the mapping relation table according to the user's correction when the judging module judges that the user has corrected the selection result.
In the embodiment of the present disclosure, the mapping table is updated according to the user correction condition when the user has corrected the selection result, so that the priority setting of various speech recognition resource output formats corresponding to the usage scenario in the mapping table is more accurate.
The voice recognition resource selection apparatus provided by the embodiments of the present disclosure determines the voice recognition resource output format required by the user according to the user's usage scene and then outputs resources in the corresponding format, thereby improving the accuracy of voice recognition resource selection and, in turn, the accuracy of the voice recognition output result.
Based on the same technical concept, the embodiment of the present disclosure correspondingly provides a computer device, as shown in fig. 3, the computer device 3 includes a memory 31 and a processor 32, the memory 31 stores a computer program, and when the processor 32 runs the computer program stored in the memory 31, the processor 32 executes the foregoing speech recognition resource selection method.
Based on the same technical concept, the embodiment of the present disclosure correspondingly provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the processor executes the foregoing speech recognition resource selection method.
In summary, the voice recognition resource selection method and apparatus, computer device, and storage medium provided in the embodiments of the present disclosure determine the voice recognition resource output format required by the user according to the usage scene and then select and output, from the matched voice recognition resources, the resources in that output format. Voice recognition resources are thus output in a format suited to the user's usage scene, which improves the accuracy of selecting voice recognition resources and the accuracy of the voice recognition output result.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, apparatuses, functional modules/units in the apparatuses disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A speech recognition resource selection method is characterized by comprising the following steps:
acquiring a voice recognition resource matched with voice input by a user;
identifying a use scene according to the relevant information of the voice input;
inquiring a voice recognition resource output format corresponding to the use scene in a preset mapping relation table, wherein the mapping relation table comprises various use scenes and the voice recognition resource output formats corresponding to the use scenes;
selecting, from the matched voice recognition resources, resources in the output format corresponding to the use scene as a selection result; and
outputting the selection result.
2. The method according to claim 1, wherein if there is more than one voice recognition resource output format corresponding to a usage scenario in the mapping table, the mapping table further includes priorities of the various voice recognition resource output formats corresponding to the usage scenario;
the querying, in a preset mapping relationship table, a voice recognition resource output format corresponding to the usage scenario specifically includes:
inquiring the priority of various voice recognition resource output formats corresponding to the use scene in a preset mapping relation table;
selecting the resource with the corresponding output format from the matched voice recognition resources according to the voice recognition resource output format corresponding to the use scene as a selection result, specifically:
and selecting the resource of the output format with the highest corresponding priority from the matched voice recognition resources as a selection result according to the priorities of the output formats of the various voice recognition resources corresponding to the use scenes.
3. The method of claim 2, further comprising:
judging whether the user corrects the selection result;
and if the user corrects the selection result, updating the mapping relation table according to the correction condition of the user.
4. The method according to any one of claims 1 to 3,
the information related to the voice input comprises: at least one of the time and the place of voice input, the tone of the voice input, the dynamic and static states of the voice input equipment and the noise of the surrounding environment;
the usage scenario includes: at least one of a work scene, a meeting scene, a leisure scene, an entertainment scene, and a party scene;
the speech recognition resource output format includes: at least one of a text format, an audio format, a picture format, and a video format.
5. A speech recognition resource selection apparatus, comprising:
an acquisition module configured to acquire a speech recognition resource matching a speech input by a user;
a scene recognition module configured to recognize a usage scene according to the related information of the voice input;
a query module configured to query, in a preset mapping relation table, the voice recognition resource output format corresponding to the use scene recognized by the scene recognition module, wherein the mapping relation table comprises various use scenes and the voice recognition resource output formats corresponding to the use scenes;
a selection module configured to select, from the matched voice recognition resources acquired by the acquisition module, resources in the output format corresponding to the use scene recognized by the scene recognition module as a selection result; and
an output module configured to output the selection result obtained by the selection module.
6. The apparatus according to claim 5, wherein if there is more than one voice recognition resource output format corresponding to a usage scenario in the mapping table, the mapping table further includes priorities of the various voice recognition resource output formats corresponding to the usage scenario;
the query module is specifically configured to query the priority of each voice recognition resource output format corresponding to the use scene recognized by the scene recognition module in a preset mapping relation table;
the selection module is specifically configured to select, from the matched voice recognition resources acquired by the acquisition module, the resource in the output format with the highest priority corresponding to the use scene recognized by the scene recognition module as the selection result.
7. The apparatus of claim 6, further comprising:
a judging module configured to judge whether the user corrects the selection result; and
an updating module configured to update the mapping relation table according to the user's correction when the judging module judges that the user corrects the selection result.
8. The apparatus according to any one of claims 5-7,
the information related to the voice input comprises: at least one of the time and the place of voice input, the tone of the voice input, the dynamic and static states of the voice input equipment and the noise of the surrounding environment;
the usage scenario includes: at least one of a work scene, a meeting scene, a leisure scene, an entertainment scene, and a party scene;
the speech recognition resource output format includes: at least one of a text format, an audio format, a picture format, and a video format.
9. A computer device comprising a memory and a processor, the memory having a computer program stored therein, the processor performing the speech recognition resource selection method according to any one of claims 1 to 4 when the processor runs the computer program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a speech recognition resource selection method according to any one of claims 1 to 4.
CN202010773211.4A 2020-08-04 2020-08-04 Voice recognition resource selection method and device, computer equipment and storage medium Pending CN111916080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010773211.4A CN111916080A (en) 2020-08-04 2020-08-04 Voice recognition resource selection method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010773211.4A CN111916080A (en) 2020-08-04 2020-08-04 Voice recognition resource selection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111916080A true CN111916080A (en) 2020-11-10

Family

ID=73288091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010773211.4A Pending CN111916080A (en) 2020-08-04 2020-08-04 Voice recognition resource selection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111916080A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103327162A (en) * 2012-03-21 2013-09-25 华为技术有限公司 Contextual model setting method and terminal device
CN108293080A (en) * 2015-11-26 2018-07-17 华为技术有限公司 A kind of method of contextual model switching
CN107463700A (en) * 2017-08-15 2017-12-12 北京百度网讯科技有限公司 For obtaining the method, apparatus and equipment of information
CN107919120A (en) * 2017-11-16 2018-04-17 百度在线网络技术(北京)有限公司 Voice interactive method and device, terminal, server and readable storage medium storing program for executing
CN108257596A (en) * 2017-12-22 2018-07-06 北京小蓦机器人技术有限公司 It is a kind of to be used to provide the method and apparatus that information is presented in target
CN109215652A (en) * 2018-10-16 2019-01-15 深圳Tcl新技术有限公司 Volume adjusting method, device, playback terminal and computer readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112687265A (en) * 2020-12-28 2021-04-20 苏州思必驰信息科技有限公司 Method and system for standardizing reverse text
CN113797012A (en) * 2021-08-27 2021-12-17 广州蓝仕威克医疗科技有限公司 Artificial intelligence wearable device and method capable of automatically adjusting temperature
CN113797012B (en) * 2021-08-27 2022-06-14 广州蓝仕威克医疗科技有限公司 Artificial intelligence wearable device and method capable of automatically adjusting temperature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination