WO2019128829A1 - Action execution method and apparatus, storage medium and electronic apparatus - Google Patents

Action execution method and apparatus, storage medium and electronic apparatus

Info

Publication number
WO2019128829A1
Authority
WO
WIPO (PCT)
Prior art keywords
scenario
action
information
service scenario
instruction
Prior art date
Application number
PCT/CN2018/122280
Other languages
English (en)
Chinese (zh)
Inventor
王斌
朱兴昌
顾泳飞
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2019128829A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4112 Peripherals receiving signals from specially adapted client devices having fewer capabilities than the client, e.g. thin client having less processing power or no tuning capabilities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Definitions

  • the present invention relates to the field of communications, and in particular to an action execution method, apparatus, storage medium, and electronic device.
  • voice control has gradually become practical, and voice-controlled execution of actions is an important direction of future development.
  • some devices can interact in natural language: asking about the weather, querying news briefs, playing music, ordering goods, and so on.
  • the embodiments of the present invention provide an action execution method, device, storage medium, and electronic device, so as to at least solve the problem in the related art that input control commands cannot be intelligently recognized, resulting in a poor user experience.
  • an action execution method including: receiving a control instruction and determining the service scenario in which a user equipment (UE) is currently located; determining, according to the control instruction and the service scenario, an action to be performed by the UE; and instructing the UE to perform the action.
  • determining the service scenario in which the UE is currently located includes at least one of: receiving, from the UE, first scenario information used to identify the service scenario in which the UE is currently located, and determining the service scenario according to the first scenario information; and determining locally stored second scenario information used to identify the service scenario in which the UE is currently located, and determining the service scenario according to the second scenario information.
  • the method further includes: receiving a modification instruction, and modifying the locally stored second scenario information according to the modification instruction.
  • determining, according to the control instruction and the service scenario, the action to be performed by the user equipment comprises: determining the action to be performed by the user equipment by analyzing the semantics of the control instruction together with the service scenario according to predetermined logic.
  • an action execution method comprising: sending a control instruction to an analysis processor; receiving an action instruction returned by the analysis processor according to the control instruction and scenario information used to identify the current business scenario; and executing the action indicated by the action instruction.
  • the scenario information includes first scenario information
  • before receiving the action instruction, the method further includes: sending the first scenario information to the analysis processor.
  • before sending the first scenario information to the analysis processor, the method further includes: determining the first scenario information by scanning the currently activated program.
  • the scenario information includes second scenario information
  • the method further includes: sending a modification instruction to the analysis processor, where the modification instruction is used to instruct the analysis processor to modify the locally stored second scenario information.
  • an action execution apparatus including: a processing module configured to receive a control instruction and determine the service scenario in which the user equipment UE is currently located; a determining module configured to determine, according to the control instruction and the service scenario, an action to be performed by the UE; and an indication module configured to instruct the UE to perform the action.
  • when determining the service scenario in which the UE is currently located, the processing module includes at least one of the following: a first determining unit configured to receive, from the UE, first scenario information used to identify the service scenario in which the UE is currently located and determine the service scenario according to the first scenario information; and a second determining unit configured to determine locally stored second scenario information used to identify the current service scenario of the UE and determine the service scenario according to the second scenario information.
  • an action execution apparatus comprising: a first sending module configured to send a control instruction to an analysis processor; a receiving module configured to receive an action instruction returned by the analysis processor according to the control instruction and scenario information used to identify the current business scenario; and an execution module configured to execute the action indicated by the action instruction.
  • the scenario information includes first scenario information
  • the device further includes: a second sending module, configured to send the first scenario information to the analysis processor before receiving the action instruction.
  • a storage medium having stored therein a computer program, wherein the computer program is arranged to execute the steps of any one of the method embodiments described above.
  • an electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is arranged to run the computer program to perform the steps in any of the above method embodiments.
  • the action indicated by the control instruction can be determined according to the service scenario in which the UE is currently located; therefore, in different service scenarios the same control command can control the execution of different actions, realizing action control based on the determined service scenario.
  • action control can thus be achieved without inputting a complete control command; that is, input control commands are intelligently recognized. This solves the problem in the related art that control commands cannot be intelligently recognized, which results in a poor user experience, and achieves the effect of improving the user experience.
  • FIG. 1 is a flowchart of a first action execution method according to an embodiment of the present invention
  • FIG. 2 is a block diagram showing the hardware structure of a mobile terminal according to an action execution method according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a second action execution method according to an embodiment of the present invention.
  • FIG. 4 is a block diagram showing the structure of a first type of action execution apparatus according to an embodiment of the present invention.
  • FIG. 5 is a structural block diagram of a second action execution apparatus according to an embodiment of the present invention.
  • Figure 6 is a system block diagram of an embodiment of the present invention.
  • FIG. 7 is a flow chart of voice control according to a specific embodiment of the present invention.
  • FIG. 8 is a flow chart of voice control according to a second embodiment of the present invention.
  • FIG. 9 is a flow chart of voice control according to a third embodiment of the present invention.
  • FIG. 10 is a flow chart of voice control according to a fourth embodiment of the present invention.
  • voice control can be taken as an example.
  • a key experience indicator of a voice control system is whether the user's manner of speaking is natural: the more naturally and colloquially the user can speak, the better the user experience.
  • current understanding of the user's language relies mainly on the language itself and is analyzed by techniques such as context analysis, natural language processing (NLP), and parse trees; that is, all sources of analysis information are the language itself.
  • the current business scenario is a very important factor: the same sentence has different meanings in different business scenarios.
  • for example, the user says "turn off the lights". If the user is in the living room, the actual meaning is "turn off the living room lights"; if in the study, the actual meaning is "turn off the study lights".
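  • as a minimal Python sketch of this disambiguation (the rule table, function, and action names are invented for illustration; the patent does not prescribe any implementation):

```python
# Hypothetical illustration: the same utterance resolves to different
# actions depending on the current service scenario. All names invented.
INTENT_RULES = {
    # (utterance, scenario) -> concrete action
    ("turn off the lights", "living_room"): "lights.living_room.off",
    ("turn off the lights", "study"): "lights.study.off",
}

def resolve_intent(utterance: str, scenario: str) -> str:
    """Map a colloquial command plus the current scenario to an action."""
    action = INTENT_RULES.get((utterance.strip().lower(), scenario))
    if action is None:
        raise ValueError(f"cannot resolve {utterance!r} in scenario {scenario!r}")
    return action

# the same words, two different actions:
assert resolve_intent("Turn off the lights", "living_room") == "lights.living_room.off"
assert resolve_intent("Turn off the lights", "study") == "lights.study.off"
```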
  • FIG. 1 is a flowchart of a first action execution method according to an embodiment of the present invention. As shown in FIG. 1, the process includes the following steps:
  • Step S102: receiving a control instruction, and determining the service scenario in which the user equipment UE is currently located;
  • Step S104: determining, according to the foregoing control instruction and the service scenario, an action to be performed by the UE;
  • Step S106: instructing the UE to perform the above action.
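  • a minimal Python sketch of this three-step flow, under assumed names and data structures (the patent does not specify any API):

```python
# Hypothetical sketch of steps S102-S106. Class and field names are
# assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlRequest:
    ue_id: str                      # which UE sent the instruction
    instruction: str                # the control instruction (e.g. recognized text)
    scenario: Optional[str] = None  # optional "first scenario information"

class AnalysisProcessor:
    def __init__(self) -> None:
        # "second scenario information" kept locally, per UE
        self.local_scenarios = {}

    def handle(self, req: ControlRequest) -> str:
        # S102: receive the control instruction and determine the UE's current
        # scenario, preferring scenario information carried in the request
        scenario = req.scenario or self.local_scenarios.get(req.ue_id, "default")
        # S104: determine the action from instruction + scenario
        action = self.determine_action(req.instruction, scenario)
        # S106: return an action instruction for the UE to execute
        return action

    def determine_action(self, instruction: str, scenario: str) -> str:
        # placeholder for the semantic analysis discussed below
        return f"{scenario}/{instruction}"

processor = AnalysisProcessor()
processor.local_scenarios["ue-1"] = "music_playing"
assert processor.handle(ControlRequest("ue-1", "pause")) == "music_playing/pause"
```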
  • the above operations may be performed by a processor (for example, an analysis processor for analyzing control commands and service scenarios). The processor may be integrated in the UE or independent of the UE, for example located in another UE or on the network side. The correspondence between the processor and UEs may be one-to-many; that is, one processor may serve multiple terminals.
  • analyzing the control instruction in combination with the service scenario is in effect analyzing the user's actual control intent; that is, one processor can perform user intent analysis of control commands for multiple terminals.
  • the action indicated by the control instruction may be determined according to the service scenario in which the UE is currently located; therefore, in different service scenarios the same control command may control the execution of different actions, realizing action control based on the determined service scenario.
  • action control can thus be achieved without inputting a complete control command; that is, input control commands are intelligently recognized. This solves the problem in the related art that control commands cannot be intelligently recognized, which results in a poor user experience, and achieves the effect of improving the user experience.
  • the service scenario in which the UE is currently located may be determined in multiple ways: for example, receiving first scenario information used to identify the service scenario in which the UE is currently located and determining the service scenario according to the first scenario information; or determining locally stored second scenario information used to identify the service scenario in which the UE is currently located and determining the service scenario according to the second scenario information. That is, in this embodiment, the service scenario information may be sent by another device (for example, a UE; the first scenario information may be sent together with the control command), or may be pre-stored inside the processor.
  • before or after determining the locally stored second scenario information used to identify the service scenario in which the UE is currently located, the method further includes: receiving a modification instruction, and modifying the locally stored second scenario information according to the modification instruction.
  • the present embodiment is mainly directed to the case where the second scenario information of the service scenario of the UE is stored locally in the processor.
  • the scenario information stored locally by the processor needs to correspond to the actual service scenario of the UE; when the actual service scenario of the UE changes, the scenario information of the UE stored locally by the processor needs to be updated.
  • one processor may correspond to multiple UEs; in that case, the processor needs to store locally the correspondence between scenario information and UEs, so that the scenario information corresponding to a given UE can be found conveniently.
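  • a minimal sketch of such a per-UE scenario store, including handling of the modification instruction described above (class and method names are invented):

```python
# Hypothetical per-UE scenario store on the analysis-processor side.
# update_scenario stands in for handling a "modification instruction"
# (compare steps S801/S802 below).
class ScenarioStore:
    def __init__(self) -> None:
        self._by_ue = {}  # ue_id -> currently recorded service scenario

    def update_scenario(self, ue_id: str, scenario: str) -> None:
        """Apply a modification instruction so the record stays consistent
        with the UE's actual service scenario."""
        self._by_ue[ue_id] = scenario

    def scenario_of(self, ue_id: str) -> str:
        """Look up the scenario recorded for a given UE."""
        return self._by_ue.get(ue_id, "unknown")

store = ScenarioStore()
store.update_scenario("stb-01", "video_playing")
assert store.scenario_of("stb-01") == "video_playing"
```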
  • determining, according to the control instruction and the service scenario, the action to be performed by the user equipment includes: determining the action to be performed by the user equipment by analyzing the semantics of the control instruction together with the service scenario according to predetermined logic.
  • a plurality of analysis technologies may be used.
  • for example, NLP and parse-tree techniques may be adopted. By analyzing the service scenario, the scenario in which the UE is currently located is determined (for example, a music playing scenario, a video playing scenario, or another scenario); the control instruction and the business scenario are then comprehensively analyzed through predetermined logic to determine the actual intention of the issuer of the control instruction, and thereby the action to be performed by the UE.
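  • one conceivable form of such predetermined logic, sketched in Python (the rule set and all names are invented; the patent leaves the concrete analysis technique open):

```python
# Hypothetical "predetermined logic": an under-specified command is
# completed using the current scenario to yield a concrete action.
SCENARIO_TARGETS = {
    "music_playing": "music_player",
    "video_playing": "video_player",
}

def analyze(instruction: str, scenario: str) -> str:
    verb = instruction.strip().lower()  # crude stand-in for NLP / parse-tree analysis
    if verb in ("pause", "stop", "resume"):
        target = SCENARIO_TARGETS.get(scenario)
        if target is None:
            raise ValueError(f"no target known for scenario {scenario!r}")
        return f"{target}.{verb}"       # e.g. "video_player.pause"
    return verb                         # command already fully specified

# the bare word "pause" is resolved differently per scenario:
assert analyze("pause", "video_playing") == "video_player.pause"
assert analyze("pause", "music_playing") == "music_player.pause"
```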
  • FIG. 2 is a hardware structural block diagram of a mobile terminal according to an action execution method according to an embodiment of the present invention.
  • mobile terminal 20 may include one or more processors 202 (only one is shown in FIG. 2; processor 202 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 204 configured to store data, and a transmission device 206 configured for communication functions.
  • the structure shown in FIG. 2 is merely illustrative and does not limit the structure of the above electronic device.
  • the mobile terminal 20 may also include more or fewer components than those shown in FIG. 2, or have a different configuration than that shown in FIG. 2.
  • the memory 204 may be configured to store software programs and modules of application software, such as program instructions/modules corresponding to the action execution method in the embodiments of the present invention; the processor 202 runs the software programs and modules stored in the memory 204 to perform functional applications and data processing, that is, to implement the above method.
  • Memory 204 can include high speed random access memory and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 204 can further include memory remotely located relative to processor 202, which can be connected to mobile terminal 20 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Transmission device 206 is arranged to receive or transmit data via a network.
  • specific examples of the above network may include a wireless network provided by a communication provider of the mobile terminal 20.
  • the transmission device 206 includes a Network Interface Controller (NIC) that can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission device 206 can be a Radio Frequency (RF) module configured to communicate with the Internet wirelessly.
  • FIG. 3 is a flowchart of a second action execution method according to an embodiment of the present invention. As shown in FIG. 3, the process includes the following steps:
  • Step S302: sending a control instruction to the analysis processor;
  • Step S304: receiving an action instruction returned by the analysis processor according to the control instruction and the scenario information used to identify the current service scenario;
  • Step S306: executing the action indicated by the above action instruction.
  • the executor performing the foregoing actions may be the UE, and the scenario information identifies the service scenario in which the UE is currently located.
  • the action indicated by the control instruction may be determined according to the service scenario in which the UE is currently located; therefore, in different service scenarios the same control command may control the execution of different actions, realizing action control based on the determined service scenario.
  • action control can thus be achieved without inputting a complete control command; the problem in the related art that a complete control command must be input to control an action is effectively solved, and the effect of improving the user experience is achieved.
  • the scenario information may be sent by the UE to the analysis processor, or may be stored locally in the analysis processor.
  • before receiving the above action instruction, the method further includes: sending the first scenario information to the analysis processor.
  • the scenario information includes first scenario information, and the first scenario information may be sent to the analysis processor together with the foregoing control instruction.
  • before sending the first scenario information to the analysis processor, the method further includes: determining the first scenario information by scanning the currently activated program.
  • the currently activated program is the program currently activated in the UE; the current service scenario of the UE can be determined by scanning it.
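  • what "scanning the currently activated program" might look like on the UE side, as a sketch (the program-to-scenario table and function name are assumptions):

```python
# Hypothetical UE-side derivation of "first scenario information" from the
# currently activated (foreground) program. The mapping is invented.
PROGRAM_TO_SCENARIO = {
    "music_app": "music_playing",
    "video_app": "video_playing",
    "photo_app": "photo_browsing",
}

def current_scenario(active_program: str) -> str:
    """Map the currently activated program to a service scenario label."""
    return PROGRAM_TO_SCENARIO.get(active_program, "idle")

# e.g. if scanning finds the video app in the foreground:
assert current_scenario("video_app") == "video_playing"
```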
  • the scenario information includes second scenario information, and the method further includes: sending a modification instruction to the analysis processor, where the modification instruction is used to instruct the analysis processor to modify the locally stored second scenario information.
  • the second scenario information may be pre-stored in the analysis processor, and the scenario information stored locally by the analysis processor needs to be updated in real time according to the actual service scenario of the UE.
  • the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
  • an action execution device is also provided, which is used to implement the above-mentioned embodiments and preferred embodiments, and will not be described again.
  • the term “module” may refer to a combination of software and/or hardware that implements a predetermined function.
  • the apparatus described in the following embodiments is preferably implemented in software, but an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
  • FIG. 4 is a structural block diagram of a first type of action execution apparatus according to an embodiment of the present invention.
  • the apparatus may be applied to a processor.
  • the apparatus includes a processing module 42, a determining module 44, and an indicating module 46.
  • the processing module 42 is configured to receive the control instruction and determine the service scenario in which the user equipment UE is currently located; the determining module 44 is connected to the processing module 42 and configured to determine, according to the foregoing control instruction and the service scenario, an action to be performed by the UE; the indication module 46 is connected to the determining module 44 and configured to instruct the UE to perform the above actions.
  • when determining the current service scenario of the UE, the processing module 42 includes at least one of the following: a first determining unit configured to receive, from the UE, first scenario information used to identify the service scenario in which the UE is currently located and determine the service scenario according to the first scenario information; and a second determining unit configured to determine locally stored second scenario information used to identify the service scenario in which the UE is currently located and determine the service scenario according to the second scenario information.
  • the first action execution apparatus is further configured to receive a modification instruction before or after determining the locally stored second scenario information used to identify the service scenario in which the UE is currently located, and to modify the locally stored second scenario information according to the modification instruction.
  • the determining module 44 is configured to determine an action to be performed by the user equipment by analyzing the semantics of the control instruction and the service scenario according to predetermined logic.
  • FIG. 5 is a structural block diagram of a second action execution apparatus according to an embodiment of the present invention.
  • the apparatus may be applied to a UE.
  • the apparatus includes a first sending module 52, a receiving module 54 and an executing module 56.
  • the device is described below:
  • the first sending module 52 is configured to send a control instruction to the analysis processor; the receiving module 54 is connected to the first sending module 52 and configured to receive an action instruction returned by the analysis processor according to the control instruction and the scenario information used to identify the current service scenario; the execution module 56 is connected to the receiving module 54 and configured to execute the action indicated by the action instruction.
  • the scenario information includes first scenario information
  • the device further includes: a second sending module, configured to send the first scenario information to the analysis processor before receiving the action instruction.
  • the second action execution apparatus is further configured to determine the first scenario information by scanning the currently activated program before sending the first scenario information to the analysis processor.
  • the scenario information includes second scenario information, and the second action execution apparatus is further configured to send a modification instruction to the analysis processor, where the modification instruction is used to instruct the analysis processor to modify the locally stored second scenario information.
  • the present invention will be generally described below in conjunction with a UE and an analysis processor.
  • the basic system module in the embodiment of the present invention is as shown in FIG. 6, wherein:
  • Terminal device (corresponding to the UE described above).
  • a device that can receive a user's voice command (one type of control command; control commands are not limited to voice commands) and perform the related actions.
  • the terminal device accepts not only voice control commands but also other control commands, for example from a set-top box remote control, a mobile phone touch screen, a somatosensory device, and the like.
  • User intent analysis module (corresponding to the above processor, which may be an analysis processor).
  • the user's intention is determined according to information such as voice provided by the terminal device, and the intention is given to the terminal device for execution.
  • the user's intention is judged from three parts of information: 1) the language information and the context information it carries, which come from the user's language; 2) the business logic information, which is preset and related to the specific business being analyzed; and 3) the business scenario information.
  • the terminal device and the user intent analysis module may be in a many-to-one relationship, and the user intent analysis module may be deployed on the network, and may provide services for multiple terminal devices at the same time.
  • the terminal device sends a voice command or the like to the user intent analysis module through the network to request analysis.
  • the voice command here may be voice sample information, or may be the text obtained from the voice samples by speech recognition; the specific form depends on the capability of the user intent analysis module.
  • the two forms do not affect the core meaning of the embodiments of the present invention; they are merely different engineering implementations.
  • Step 1: the terminal device receives the user's voice command and sends a request message to the user intent analysis module.
  • the request includes at least two pieces of information: 1) the information of the voice instruction itself, as text or as voice sample information; and 2) information about the current service scenario of the terminal device, which is obtained by other technical means, independent of the voice instruction information.
  • Step 2: after receiving the request message, the user intent analysis module combines semantic analysis of the voice instruction information with the current business scenario information and the preset business logic to determine the user's intention, and returns the user's intention to the terminal device.
  • Step 3: the terminal device receives the user intent message and performs the corresponding processing.
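  • the step-1 request and the returned intent might be serialized as follows; this is a sketch only, since the patent fixes neither a wire format nor field names:

```python
# Hypothetical wire format for the step-1 request and the step-2 answer.
import json

request = {
    "ue_id": "stb-01",
    "voice_instruction": "next episode",  # text, or a reference to voice samples
    "service_scenario": "video_playing",  # obtained independently of the voice input
}
payload = json.dumps(request)             # sent to the user intent analysis module

# the user intent analysis module answers with an action/intent instruction:
response = json.loads('{"intent": "video_player.play_next_episode"}')
assert response["intent"] == "video_player.play_next_episode"
```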
  • business scenarios can include movies, TV, games, photos, and more.
  • FIG. 7 is a flow chart of voice control according to a specific embodiment of the present invention. As shown in FIG. 7, the process includes the following steps:
  • S701: when the service scenario of the set-top box changes, the current service scenario information is recorded.
  • the operations herein include, but are not limited to, voice commands, remote control commands, smart terminal touch screen operations, gesture commands, and the like.
  • S702: the set-top box receives the voice instruction and requests the user intent analysis module to perform user intent analysis.
  • the information carried in the message includes at least: the voice instruction information and the current service scenario information recorded by the set-top box.
  • as described for the user intent analysis module above (module 602 in FIG. 6), the voice instruction information here may be text information or voice sample information.
  • S703: the user intent analysis module receives the request message, combines semantic analysis of the voice instruction information with the business logic and the current business scenario, and determines the user's intention.
  • the current service scenario in this step comes from the request message of the terminal device.
  • S704: the user intent analysis module returns the behavior intention information to the set-top box.
  • S705: the set-top box receives the behavior intention information and performs the corresponding operation.
  • FIG. 8 is a flow chart of voice control according to a second embodiment of the present invention. As shown in FIG. 8, the process includes the following steps:
  • S801: the service environment of the set-top box changes, and a message for modifying the service scenario is sent to the user intent analysis module, requesting it to modify the recorded current service scenario of the set-top box.
  • S802: the user intent analysis module receives the message and modifies the recorded current service scenario information of the set-top box, ensuring that the recorded scenario information is consistent with the actual service scenario of the set-top box.
  • S803: the set-top box receives the voice instruction and requests the user intent analysis module to perform user intent analysis; the message carries the specific voice instruction information.
  • as described for the user intent analysis module above (module 602 in FIG. 6), the voice instruction information here may be text information or voice sample information.
  • S804: the user intent analysis module receives the request message, combines semantic analysis of the voice instruction information with the business logic and the current business scenario, and determines the user's intention.
  • the current business scenario here comes from the record kept by the user intent analysis module.
  • S805: the user intent analysis module returns the behavior intention information to the set-top box.
  • S806: the set-top box receives the message and performs the corresponding processing.
  • FIG. 9 is a flow chart of voice control according to a third embodiment of the present invention. As shown in FIG. 9, the process includes the following steps:
  • S901: the set-top box receives the voice instruction and requests the user intent analysis module to perform user intent analysis.
  • the information carried in the message includes at least: the voice instruction information and the current service scenario information of the set-top box.
  • as described for the user intent analysis module above (module 602 in FIG. 6), the voice instruction information here may be text information or voice sample information.
  • the current business scenario is obtained in the following manner: the program currently activated at the front end of the television display is scanned to obtain the business scenario corresponding to the current business.
  • S902: the user intent analysis module receives the request message, combines semantic analysis of the voice instruction information with the business logic and the current business scenario, and determines the user's intention.
  • the current service scenario in this step comes from the request message of the terminal device.
  • S903: the user intent analysis module returns the behavior intention information to the set-top box.
  • S904: the set-top box receives the behavior intention information and performs the corresponding operation.
  • FIG. 10 is a flow chart of voice control according to a fourth embodiment of the present invention. As shown in FIG. 10, the process includes the following steps:
  • S1001: the home robot's scenario information is preset, such as a living room, a study, watching, listening to songs, and the like; the home robot has technical means to judge the current business scenario.
  • S1002: the home robot receives the owner's voice command and sends a request message to the user intent analysis module.
  • the message contains at least two pieces of information: 1) the information of the voice instruction itself; and 2) the current scenario information, such as the current location (e.g., living room, study) or the current owner behavior (watching, listening to songs, etc.).
  • as described for the user intent analysis module above (module 602 in FIG. 6), the voice instruction information here may be text information or voice sample information.
  • S1003: the user intent analysis module receives the request message and combines semantic analysis of the voice instruction information with the business logic and the current business scenario to analyze the user's intention.
  • the current service scenario in this step comes from the request message of the terminal device.
  • S1004: the user intent analysis module returns the intention information to the home robot.
  • S1005: the home robot receives the message and performs the corresponding processing.
  • each of the above modules may be implemented by software or hardware.
  • the foregoing may be implemented by, but is not limited to, the following forms: the above modules are all located in the same processor; or the above modules are located, in any combination, in different processors.
  • Embodiments of the present invention also provide a storage medium having stored therein a computer program, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
  • the foregoing storage medium may include, but is not limited to, a USB flash drive, a Read-Only Memory (ROM), and a Random Access Memory (RAM).
  • Embodiments of the present invention also provide an electronic device comprising a memory and a processor having a computer program stored therein, the processor being arranged to execute a computer program to perform the steps of any of the method embodiments described above.
  • the electronic device may further include a transmission device and an input and output device, wherein the transmission device is connected to the processor, and the input and output device is connected to the processor.
  • the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that herein, or they may be separately fabricated into individual integrated circuit modules, or multiple of the modules or steps may be fabricated into a single integrated circuit module.
  • the invention is not limited to any specific combination of hardware and software.
  • the action execution method, apparatus, storage medium, and electronic device provided by the embodiments of the present invention have the following beneficial effects: the problem in the related art that input control commands cannot be intelligently recognized, resulting in a poor user experience, is effectively solved, and the effect of improving the user experience is achieved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An action execution method and apparatus, a storage medium, and an electronic apparatus are disclosed. The method comprises: receiving a control instruction, and determining a service scenario in which a user equipment (UE) is currently located (S102); determining, according to the control instruction and the service scenario, an action to be performed by the UE (S104); and instructing the UE to perform the action (S106). The method achieves the goal of implementing action control without needing to input a complete control instruction in a determined service scenario, and effectively solves the problem in the related art that an input control instruction cannot be intelligently recognized, which causes a poor user experience; an improvement in user experience is thereby obtained.
PCT/CN2018/122280 2017-12-28 2018-12-20 Action execution method and apparatus, storage medium and electronic apparatus WO2019128829A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711461013.9 2017-12-28
CN201711461013.9A CN108197213A (zh) Action execution method, apparatus, storage medium and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2019128829A1 (fr)

Family

ID=62585377

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122280 WO2019128829A1 (fr) 2017-12-28 2018-12-20 Procédé et appareil d'exécution d'action, support d'informations et appareil électronique

Country Status (2)

Country Link
CN (1) CN108197213A (fr)
WO (1) WO2019128829A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197213A (zh) * 2017-12-28 2018-06-22 中兴通讯股份有限公司 Action execution method, apparatus, storage medium and electronic apparatus
CN111627442A (zh) * 2020-05-27 2020-09-04 星络智能科技有限公司 Speech recognition method, processor, system, computer device and readable storage medium


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1744071A (zh) * 2004-08-31 2006-03-08 英业达股份有限公司 Virtual-scene interactive language learning system and method
CN106855796A (zh) * 2015-12-09 2017-06-16 阿里巴巴集团控股有限公司 Data processing method and apparatus, and intelligent terminal
CN106855771A (zh) * 2015-12-09 2017-06-16 阿里巴巴集团控股有限公司 Data processing method and apparatus, and intelligent terminal
CN105956009B (zh) * 2016-04-21 2019-09-06 深圳大数点科技有限公司 Method for real-time scenario-based content matching and pushing
CN107507616B (zh) * 2017-08-29 2021-06-25 美的智慧家居科技有限公司 Method and apparatus for setting a gateway scenario

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150181155A1 (en) * 2013-10-25 2015-06-25 Joseph Rumteen Mobile device video decision tree
CN106683662A (zh) * 2015-11-10 2017-05-17 中国电信股份有限公司 Speech recognition method and apparatus
CN107146622A (zh) * 2017-06-16 2017-09-08 合肥美的智能科技有限公司 Refrigerator, voice interaction system and method, computer device, readable storage medium
CN108197213A (zh) * 2017-12-28 2018-06-22 中兴通讯股份有限公司 Action execution method, apparatus, storage medium and electronic apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112073471A (zh) * 2020-08-17 2020-12-11 青岛海尔科技有限公司 Device control method and apparatus, storage medium and electronic apparatus
CN112073471B (zh) * 2020-08-17 2023-07-21 青岛海尔科技有限公司 Device control method and apparatus, storage medium and electronic apparatus
CN112130459A (zh) * 2020-09-16 2020-12-25 青岛海尔科技有限公司 Status information display method and apparatus, storage medium and electronic apparatus
CN116132209A (zh) * 2023-01-31 2023-05-16 青岛海尔科技有限公司 Scenario construction method and apparatus, storage medium and electronic apparatus
CN115801855A (zh) * 2023-02-06 2023-03-14 广东金朋科技有限公司 Device control method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN108197213A (zh) 2018-06-22

Similar Documents

Publication Publication Date Title
WO2019128829A1 (fr) Action execution method and apparatus, storage medium and electronic apparatus
JP6999594B2 (ja) 映像再生方法及び装置
CN109658932B (zh) 一种设备控制方法、装置、设备及介质
JP6616473B2 (ja) ページを制御する方法および装置
US10448082B2 (en) Information exchanging method and device, audio terminal and computer-readable storage medium
US10311877B2 (en) Performing tasks and returning audio and visual answers based on voice command
JP6867441B2 (ja) 音声要求を処理するための方法および装置
US20170097974A1 (en) Resolving conflicts within saved state data
CN105072143A (zh) 基于人工智能的智能机器人与客户端的交互系统
JP7353497B2 (ja) 能動的に対話の開始を提起するためのサーバ側処理方法及びサーバ、並びに能動的に対話の開始が提起できる音声インタラクションシステム
US11270690B2 (en) Method and apparatus for waking up device
US11057664B1 (en) Learning multi-device controller with personalized voice control
CN107146608B (zh) 一种播放控制方法、装置及智能设备
US10097895B2 (en) Content providing apparatus, system, and method for recommending contents
US20170195384A1 (en) Video Playing Method and Electronic Device
US9332401B2 (en) Providing dynamically-translated public address system announcements to mobile devices
CN110177300B (zh) 程序运行状态的监控方法、装置、电子设备和存储介质
US20200098367A1 (en) Output for improving information delivery corresponding to voice request
CN109862100B (zh) 用于推送信息的方法和装置
JP2019050554A (ja) 音声サービスを提供するための方法および装置
CN105302925A (zh) 推送语音搜索数据的方法和装置
CN113672748A (zh) 多媒体信息播放方法及装置
CN105812845A (zh) 一种媒体资源推送方法、系统和基于Android系统的媒体播放器
JP2021530905A (ja) ビデオ処理方法、装置、端末及び記憶媒体
CN111161734A (zh) 基于指定场景的语音交互方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18893937

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.11.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18893937

Country of ref document: EP

Kind code of ref document: A1