CN111160318B - Electronic equipment control method and device - Google Patents


Info

Publication number
CN111160318B
CN111160318B (application CN202010014503.XA)
Authority
CN
China
Prior art keywords
user
limb
electronic equipment
limb action
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010014503.XA
Other languages
Chinese (zh)
Other versions
CN111160318A (en)
Inventor
戚耀文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Shanghai Xiaodu Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Shanghai Xiaodu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd and Shanghai Xiaodu Technology Co Ltd
Priority to CN202010014503.XA
Publication of CN111160318A
Application granted
Publication of CN111160318B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/4401: Bootstrapping
    • G06F9/4418: Suspend and resume; Hibernate and awake

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a control method and device for an electronic device, relating to the technical field of artificial intelligence. The specific implementation scheme is as follows: after the electronic device detects that a user has entered the recognition range of its camera, it recognizes a first limb action of the user and judges whether the first limb action corresponds to a first utterance, the first utterance being related to a wake-up instruction. If the electronic device judges that the first limb action corresponds to the first utterance related to the wake-up instruction, it enters a wake-up state. In this process, the user can wake up the electronic device through a limb action alone, without controlling it by voice, which makes it possible to wake up the device both when the user cannot conveniently speak and when the environment is noisy.

Description

Electronic equipment control method and device
Technical Field
Embodiments of the disclosure relate to the technical field of artificial intelligence (AI), and in particular to a method and a device for controlling an electronic device.
Background
With the rapid development of technology, daily life is filled with technological, intelligent products. Among them, the smart speaker is favored by users because it is small and simple to use.
Typically, users interact with a smart speaker through voice. For example, the user first issues a wake-up instruction to wake the smart speaker, and then issues a function instruction to interact with it. Taking a smart speaker as an example, after the user says the wake-up word to wake the speaker, the user continues with "play the song 'Daoxiang'", and the smart speaker then obtains the audio resource of "Daoxiang" locally or over the network and plays it.
In the wake-up process above, the user must input the wake-up instruction by voice. However, when the user cannot conveniently speak, the smart speaker cannot be woken up and therefore cannot be controlled.
Disclosure of Invention
The embodiment of the disclosure relates to a method for waking up an intelligent device: the device is woken up through a limb action and then controlled, so that the electronic device can be controlled even when the user cannot conveniently speak. …
In a first aspect, an embodiment of the present application provides a method for controlling an electronic device, including: recognizing a first limb action of a user; judging whether the first limb action corresponds to a first utterance, the first utterance being related to a wake-up instruction; and, if the first limb action corresponds to the first utterance, controlling the electronic device to enter a wake-up state. With this scheme, the user can wake up the electronic device through a limb action alone, without controlling it by voice, which makes it possible to wake up the device both when the user cannot conveniently speak and when the environment is noisy.
In one possible design, if the first utterance is also related to an operation instruction, controlling the electronic device to enter the wake-up state further includes: executing the operation instruction associated with the first utterance. With this scheme, waking the electronic device with a limb action and making it execute the operation associated with that action are achieved at the same time.
In a possible design, after the electronic device is controlled to enter the wake-up state because the first limb action corresponds to the first utterance, the method further includes: obtaining a second utterance, the second utterance supplementing the first utterance; splicing the first utterance and the second utterance into a third utterance, the third utterance being related to an operation instruction; and controlling the electronic device with the operation instruction related to the third utterance. With this scheme, the electronic device is controlled by means of utterance splicing.
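The splicing step above can be sketched in a few lines; this is a minimal illustration only, and `splice_utterances` is a hypothetical name rather than an identifier from the patent.

```python
def splice_utterances(first: str, second: str) -> str:
    """Synthesize a third utterance from a wake-related first utterance
    (e.g. "play") and a supplementary second utterance (e.g. "Daoxiang").
    The spliced result is what gets mapped to an operation instruction."""
    return f"{first.strip()} {second.strip()}"

# First utterance comes from the wake-up gesture, the second from a
# follow-up gesture or voice signal (see the two designs below).
third = splice_utterances("play", "Daoxiang")  # "play Daoxiang"
```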
In a possible design, obtaining the second utterance includes: recognizing a second limb action of the user; and determining the second utterance from the second limb action. With this scheme, the user controls the electronic device through a combination of several limb actions, which is highly flexible.
In a possible design, obtaining the second utterance includes: collecting a voice signal uttered by the user; and determining the second utterance from the voice signal. With this scheme, the user controls the electronic device by combining limb actions with voice, which is highly flexible.
In one possible design, before recognizing the first limb action of the user, the method further includes: displaying a setting interface for setting the correspondence between limb actions and utterances. A text box is displayed on the setting interface for the user to enter a custom utterance, and at least one of the following buttons is also displayed: a system limb action selection button, for the user to select a limb action preset by the system; a custom limb action selection button, for the user to select a pre-stored limb action; and a custom limb action shooting button, for the user to start the camera of the electronic device and shoot a limb action. The correspondence between the user's limb action and utterance is then stored according to the user's operations on the setting interface. With this scheme, the user can set the correspondence between limb actions and utterances autonomously.
In one possible design, there are multiple text boxes, and different text boxes display different utterances. After the setting interface for setting the correspondence between limb actions and utterances is displayed, the method further includes: recognizing a touch instruction in which the user drags any one of the text boxes; and adjusting the order of that text box among the text boxes according to the touch instruction. With this scheme, the user can autonomously adjust the order of the utterances bound to a limb action.
In a second aspect, an embodiment of the present application provides an electronic device control apparatus, including:
the recognition module, used for recognizing a first limb action of a user;
the judging module, used for judging whether the first limb action corresponds to a first utterance, the first utterance being related to a wake-up instruction;
and the wake-up module, used for controlling the electronic device to enter a wake-up state if the judging module judges that the first limb action corresponds to the first utterance.
In a possible design, the device further comprises: a control module, used for executing the operation instruction associated with the first utterance after the wake-up module controls the electronic device to enter the wake-up state, if the first utterance is also related to an operation instruction.
In a possible design, the device further comprises: a control module, used for obtaining a second utterance after the judging module judges that the first limb action corresponds to the first utterance and the wake-up module controls the electronic device to enter the wake-up state, the second utterance supplementing the first utterance; splicing the first utterance and the second utterance into a third utterance, the third utterance being related to an operation instruction; and controlling the electronic device with the operation instruction related to the third utterance.
In one possible design, when obtaining the second utterance, the control module is configured to recognize a second limb action of the user and determine the second utterance from the second limb action.
In one possible design, when obtaining the second utterance, the control module is configured to collect a voice signal uttered by the user and determine the second utterance from the voice signal.
In a possible design, the device further comprises a display module and a setting module. The display module is used for displaying, before the recognition module recognizes the first limb action of the user, a setting interface for setting the correspondence between limb actions and utterances. A text box is displayed on the setting interface for the user to enter a custom utterance, and at least one of the following buttons is also displayed: a system limb action selection button, for the user to select a limb action preset by the system; a custom limb action selection button, for the user to select a pre-stored limb action; and a custom limb action shooting button, for the user to start the camera of the electronic device and shoot a limb action.
The setting module is used for storing the correspondence between the user's limb action and utterance according to the user's operations on the setting interface.
In one possible design, there are multiple text boxes, and different text boxes display different utterances. The setting module is used for recognizing, after the display module displays the setting interface, a touch instruction in which the user drags any one of the text boxes, and for adjusting the order of that text box among the text boxes according to the touch instruction.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions enabling the at least one processor to perform the method of the first aspect or of any possible implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of the first aspect or of the various possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing the electronic device to perform the method of the first aspect or the various possible implementations of the first aspect.
In a sixth aspect, an embodiment of the present application provides a method for controlling an electronic device, including: recognizing a first limb action of a user; determining a first utterance corresponding to the first limb action; determining a control instruction corresponding to the first utterance; and controlling the electronic device with the control instruction.
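The sixth-aspect pipeline, limb action to utterance to control instruction to device control, can be sketched as below. All names, tables and the stub executor are illustrative assumptions of this sketch, not APIs defined by the patent.

```python
def control_device(action, action_to_utterance, utterance_to_instruction, execute):
    """Resolve a recognized limb action to its bound utterance, map the
    utterance to a control instruction, and execute the instruction."""
    utterance = action_to_utterance[action]            # first utterance
    instruction = utterance_to_instruction[utterance]  # control instruction
    execute(instruction)                               # control the device
    return instruction

# Hypothetical correspondence tables and a stub executor:
executed = []
instr = control_device(
    "thumb_up",
    {"thumb_up": "play Daoxiang"},
    {"play Daoxiang": ("PLAY", "Daoxiang")},
    executed.append,
)
```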
One embodiment of the above application has the following advantages or benefits: after the electronic device detects that a user has entered the recognition range of its camera, it recognizes a first limb action of the user and judges whether the first limb action corresponds to a first utterance, the first utterance being related to a wake-up instruction. If the electronic device judges that the first limb action corresponds to the first utterance related to the wake-up instruction, it enters a wake-up state. In this process, the user can wake up the electronic device through a limb action alone, without controlling it by voice, which makes it possible to wake up the device both when the user cannot conveniently speak and when the environment is noisy.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1A is a schematic diagram of an operating environment of an electronic device control method according to an embodiment of the present disclosure;
FIG. 1B is a schematic diagram of an operating environment of an electronic device control method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a setting interface in the electronic device control method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the process of setting the correspondence between a limb action and an utterance in a custom manner in the electronic device control method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the process of setting the correspondence between a limb action and an utterance in real time in the electronic device control method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the utterance order adjustment process in the electronic device control method according to an embodiment of the present application;
FIG. 6 is a flowchart of an electronic device control method according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of interaction between a user and an electronic device in the electronic device control method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of interaction between a user and an electronic device in the electronic device control method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of interaction between a user and an electronic device in the electronic device control method according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device control apparatus according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of another electronic device control apparatus according to an embodiment of the present application;
FIG. 12 is a block diagram of an electronic device for implementing the electronic device control method of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
At present, more and more intelligent electronic devices, such as smart speakers, home robots and smart televisions, are entering people's daily lives. As tools for accessing the Internet by voice, they let users play songs, shop online or check the weather, and they can also control smart home appliances, for example turning on an air conditioner or setting the temperature of a refrigerator.
In general, when a user performs voice interaction with the electronic device, the device is first woken up by a wake-up word or the like, and commands are then sent to it by voice. However, when the user cannot conveniently speak, the electronic device cannot be woken up and therefore cannot be controlled by voice. Moreover, in a noisy environment, the noise may prevent the electronic device from correctly recognizing the user's speech.
In view of this, embodiments of the present application provide a method and device for controlling an electronic device, which wake up an intelligent device through a limb action and then control it, so that the electronic device can be controlled when the user cannot conveniently speak, and can be controlled accurately in a noisy environment.
FIG. 1A is a schematic diagram of an operating environment of an electronic device control method according to an embodiment of the present disclosure. Referring to FIG. 1A, the operating environment includes an electronic device 1, a terminal device 2 and a server 3, and the server 3 establishes network connections with the electronic device 1 and the terminal device 2. The electronic device 1 is, for example, a smart speaker. A management application (APP) for the electronic device 1 is installed on the user's terminal device 2. The user opens the management APP on the terminal device 2; the APP provides a setting interface for setting the correspondence between limb actions and utterances. The user sets the correspondence on this interface and stores it on the server 3, from which the electronic device 1 downloads and stores it; alternatively, the electronic device 1 acquires the correspondence from the terminal device 2 via Bluetooth communication or the like and saves it. The electronic device 1 has a camera. When the user makes a limb action in front of the camera, the electronic device 1 either sends the limb action to the server 3 to determine the corresponding utterance, or queries its locally stored correspondence to determine the utterance. The electronic device 1 then determines the control instruction corresponding to that utterance and controls itself according to the control instruction.
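The local-or-server lookup described for FIG. 1A can be sketched as a two-step resolution; `query_server` stands in for the network round trip to server 3 and, like the gesture identifiers, is an assumption of this sketch.

```python
def lookup_utterance(action_id, local_map, query_server=None):
    """Resolve a limb action to its bound utterance: consult the locally
    stored correspondence first, then fall back to the server (FIG. 1A)."""
    utterance = local_map.get(action_id)
    if utterance is None and query_server is not None:
        utterance = query_server(action_id)
    return utterance

local = {"thumb_up": "play Daoxiang"}
server_side = {"fist": "turn on the projector"}
hit = lookup_utterance("thumb_up", local, server_side.get)   # local hit
fallback = lookup_utterance("fist", local, server_side.get)  # server fallback
```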
FIG. 1B is a schematic diagram of an operating environment of an electronic device control method according to an embodiment of the present disclosure. Referring to FIG. 1B, the operating environment includes an electronic device 1 and a server 3, and the server 3 establishes a network connection with the electronic device 1. The electronic device 1 has a camera and a display screen, and a management application (APP) for the electronic device 1 is installed on the device itself. The APP provides a setting interface for setting the correspondence between limb actions and utterances; the user sets the correspondence on this interface and stores it locally or on the server 3. After the user makes a limb action in front of the camera, the electronic device 1 either sends the limb action to the server 3 to determine the corresponding utterance, or queries its locally stored correspondence to determine the utterance. The electronic device 1 then determines the control instruction corresponding to that utterance and controls itself according to the control instruction.
In the architectures shown in FIG. 1A and FIG. 1B, the electronic device 1 may be any electronic device equipped with at least a camera, such as a home robot, a smart speaker or a smart television, with or without a display screen. The terminal device 2 may be the user's mobile phone, tablet computer, personal digital assistant (PDA), portable Android device (PAD) or the like, and the server 3 may be a server corresponding to one or more services on the electronic device 1.
FIG. 2 is a schematic diagram of a setting interface in the electronic device control method according to an embodiment of the present application. Referring to FIG. 2, besides a text box, the setting interface of the management APP displays a system limb action selection button, a custom limb action selection button and a shoot-custom-limb-action button. The interface is used to set the correspondence between limb actions and utterances. The system limb action selection button is used to select limb actions preset by the system, such as the thumbs-up, victory and fist gestures in the figure, which are preset in the electronic device before it leaves the factory. The electronic device may also bind a preset limb action to a preset utterance before leaving the factory; for example, the thumbs-up gesture is pre-bound to the utterance "play 'Daoxiang'", and such an utterance is called a system utterance. Alternatively, the electronic device may preset only the gestures and leave the correspondence between gesture and utterance to the user. For example, when the user selects the thumbs-up gesture from the system limb actions and enters "play 'Daoxiang'" in the text box, the thumbs-up gesture and the utterance "play 'Daoxiang'" are bound together.
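Saving a binding from the FIG. 2 interface amounts to storing one gesture-to-utterance entry. A minimal sketch follows; the gesture identifiers and the `bind` helper are hypothetical, introduced only for illustration.

```python
# Correspondence store behind the setting interface: the user picks a
# gesture (system, pre-stored, or freshly shot) and types an utterance
# into the text box; saving records the pair.
bindings: dict[str, str] = {}

def bind(gesture: str, utterance: str) -> None:
    """Persist the correspondence between one limb action and one utterance."""
    bindings[gesture] = utterance

bind("thumb_up", "play Daoxiang")        # system gesture + custom utterance
bind("victory", "turn on the projector")
```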
FIG. 3 is a schematic diagram of the process of setting the correspondence between a limb action and an utterance in a custom manner in the electronic device control method according to an embodiment of the present application. Referring to FIG. 3, when the user taps preview A in the custom limb action area, the electronic device jumps to a picture interface that stores the various limb action pictures the user has previously taken or downloaded. The user selects one of the limb action pictures and taps Next, and preview A in the management APP interface changes to a thumbnail of the selected limb action. The user then enters "turn on the projector" in the text box, and the selected gesture is bound to the utterance "turn on the projector".
FIG. 4 is a schematic diagram of the process of setting the correspondence between a limb action and an utterance in real time in the electronic device control method according to an embodiment of the present application. Referring to FIG. 4, after the user taps the shoot-custom-limb-action button, the electronic device jumps to shooting mode; after shooting the limb action, the user taps Done and enters the utterance input interface. The user then inputs an utterance, such as "play 'Village Ease Tour'", and the three-finger limb action is bound to the utterance "play 'Village Ease Tour'".
The embodiments above describe how to set the correspondence between limb actions and utterances using the binding of one limb action to one utterance as an example. In the embodiment of the present application, however, several utterances may also be bound to one limb action: after the user makes the limb action in front of the camera, the electronic device determines the utterances corresponding to that action and executes the control instruction corresponding to each utterance in turn. For example, if the user's custom thumbs-up gesture corresponds to three utterances, namely "play 'Daoxiang'", "turn on the air conditioner" and "play the weather forecast", then when the user makes a thumbs-up gesture in front of the camera, the electronic device plays "Daoxiang", then turns on the air conditioner and plays the weather forecast. The user can also adjust the order of the utterances, for example by dragging, and thereby adjust the order in which the electronic device executes the corresponding control instructions. Referring to FIG. 5, a schematic diagram of the utterance order adjustment process in the electronic device control method according to an embodiment of the present application, after adjustment the electronic device plays the weather forecast, turns on the air conditioner and then plays "Daoxiang".
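Binding several utterances to one limb action and reordering them by dragging (FIG. 5) can be modeled as an ordered list per gesture. This is a sketch under the assumption that each drag reports an old and a new index; the names are illustrative.

```python
# One limb action bound to several utterances, executed in this order:
bindings = {
    "thumb_up": ["play Daoxiang", "turn on the air conditioner",
                 "play the weather forecast"],
}

def reorder(gesture: str, old_index: int, new_index: int) -> None:
    """Move one utterance within a gesture's execution sequence, as a
    single drag on the setting interface would."""
    seq = bindings[gesture]
    seq.insert(new_index, seq.pop(old_index))

# Two drags reproduce the FIG. 5 result: weather forecast first,
# then air conditioner, then "Daoxiang".
reorder("thumb_up", 2, 0)
reorder("thumb_up", 2, 1)
```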
The electronic device control method of the embodiment of the present application is described in detail below, taking an electronic device with a camera and a display screen as an example; see FIG. 6.
Fig. 6 is a flowchart of an electronic device control method provided in an embodiment of the present disclosure. The embodiment comprises the following steps:
101. a first limb action of the user is identified.
The electronic device is equipped with a camera. When the electronic device is in a sleep state and the user makes a first limb action in front of the camera, the electronic device captures a picture containing the first limb action and recognizes the action from the picture through feature extraction or the like. The first limb action is associated with a wake-up instruction. For example, the first utterance corresponding to the first limb action is "play", where "play" is a wake-up instruction, so the first utterance is related to the wake-up instruction; for another example, the first utterance corresponding to the first limb action is "play Zhou Jielun's 'Daoxiang'", where "play" is a wake-up instruction, so the first utterance is related to the wake-up instruction.
102. Judging whether the first limb action corresponds to a first utterance, the first utterance being related to a wake-up instruction.
103. If the first limb action corresponds to the first utterance, controlling the electronic device to enter a wake-up state.
For example, the user may make various limb actions in front of the camera, including gestures, body movements and the like. If the electronic device judges that the first limb action the user has just made corresponds to the first utterance, it controls itself to enter the wake-up state. For example, if, when the correspondence between limb actions and utterances is defined, the utterance corresponding to the thumbs-up gesture is set to "play Zhou Jielun's 'Daoxiang'", then after the user makes a thumbs-up gesture in front of the camera, the electronic device wakes up and plays Zhou Jielun's "Daoxiang"; for another example, if the utterance corresponding to the thumbs-up gesture is set to "play", then after the user makes the thumbs-up gesture in front of the camera, the electronic device goes from the sleep state into the wake-up state.
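Steps 101 to 103 above can be put together in one small sketch. `wake` is a hypothetical callable standing in for the device's wake routine, and the correspondence table is assumed to hold only wake-related first utterances, as in the flow of FIG. 6.

```python
def handle_action(action, wake_correspondence, wake):
    """Step 101: the recognized first limb action arrives as `action`.
    Step 102: check whether it corresponds to a first utterance (all
    entries in `wake_correspondence` are assumed wake-related here).
    Step 103: if so, control the device to enter the wake-up state."""
    utterance = wake_correspondence.get(action)
    if utterance is not None:
        wake(utterance)
    return utterance

woken = []
result = handle_action("thumb_up", {"thumb_up": "play Daoxiang"}, woken.append)
miss = handle_action("wave", {"thumb_up": "play Daoxiang"}, woken.append)
```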
According to the electronic device control method provided by the embodiment of the present disclosure, after the electronic device detects that a user has entered the recognition range of the camera, it identifies the first limb action of the user and judges whether the first limb action is the limb action corresponding to the first speech phrase, where the first speech phrase is related to the wake-up instruction. If the electronic device judges that the first limb action is the limb action corresponding to the first speech phrase related to the wake-up instruction, the electronic device enters the awake state. In this process, the user can wake up the electronic device through a limb action without controlling it by voice, which achieves the purpose of waking up the electronic device both when it is inconvenient for the user to speak and in a noisy environment.
The electronic device control method according to the embodiment of the present application will be described in detail below with several scenarios.
Scenario one: a custom limb action associated with a speech phrase.
Illustratively, in the limb action definition stage, the user selects a thumbs-up gesture from the system's preset limb actions (alternatively, the user may shoot a limb action in real time or select a pre-shot limb action from local storage), and then inputs and saves the speech phrase associated with the limb action, for example "play Daoxiang". Fig. 7 is a schematic diagram of the interaction between a user and an electronic device in the electronic device control method according to an embodiment of the present application. Referring to fig. 7, if the user wants the electronic device to play Daoxiang, the user makes a thumbs-up gesture toward the electronic device, so that the electronic device automatically wakes up and automatically generates the speech phrase "play Daoxiang"; the electronic device obtains a control instruction for playing Daoxiang according to the speech phrase, obtains the audio resource of Daoxiang locally or from the network, and plays it.
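The definition stage amounts to maintaining a registry of gesture-to-phrase bindings. A minimal sketch, in which the function and dictionary names are illustrative assumptions rather than part of the embodiment:

```python
# Sketch of the limb action definition stage: the user picks a gesture
# (preset, pre-shot, or shot in real time) and saves the associated phrase.
bindings: dict[str, str] = {}


def define_binding(gesture: str, phrase: str) -> None:
    """Save the user-defined correspondence between a gesture and a speech phrase."""
    bindings[gesture] = phrase


# The association defined in the example above:
define_binding("thumbs_up", "play Daoxiang")
```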
Scenario two: a custom limb action associated with a speech phrase, supplemented by the user's voice.
In this scenario, the first speech phrase is related to the wake-up instruction but does not indicate the operation of the electronic device after waking up. In this case, after the electronic device recognizes the first limb action and enters the awake state, it determines a second speech phrase, which is used to supplement the first speech phrase. The electronic device then synthesizes a third speech phrase from the first and second speech phrases, and after waking up controls the electronic device using the operation instruction related to the third speech phrase.
For example, after making the first limb action, the user also sends a voice to the electronic device; the electronic device collects the voice signal and determines the second speech phrase from it. For example, in the limb action definition stage, the user selects a raised-little-finger gesture from the system's preset limb actions (alternatively, the user may shoot a limb action in real time or select a pre-shot limb action from local storage), and then inputs and saves the speech phrase associated with the limb action, for example "play". Fig. 8 is a schematic diagram of the interaction between a user and an electronic device in the electronic device control method according to an embodiment of the present application. Referring to fig. 8, if the user wants the electronic device to start the play function, the user raises a little finger toward the electronic device, so that the electronic device automatically wakes up and automatically generates and displays the text "play". Then, within the voice perception range of the electronic device, the user says "Zhou Jielun's Qilixiang"; the electronic device collects the voice signal and parses it to obtain "Zhou Jielun's Qilixiang". The electronic device then combines the first speech phrase "play" and the second speech phrase "Zhou Jielun's Qilixiang" into the third speech phrase "play Zhou Jielun's Qilixiang", converts that phrase into a control instruction the electronic device can recognize, searches for Qilixiang locally or through the network, and plays it.
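The splicing step in this scenario amounts to concatenating the gesture-derived phrase with the phrase obtained from speech recognition. A sketch under the assumption that speech phrases are plain strings (the function name is an assumption):

```python
# Sketch of scenario two: the first phrase comes from the gesture and the
# second from the user's voice; the third phrase is their concatenation.
def synthesize(first_phrase: str, second_phrase: str) -> str:
    """Combine the gesture phrase and the voice phrase into one full command."""
    return f"{first_phrase} {second_phrase}"
```

For the example above, `synthesize("play", "Zhou Jielun's Qilixiang")` yields the third phrase, which is then converted into a control instruction.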
With this scheme, the user controls the electronic device by combining a limb action with voice, which offers high flexibility.
Scenario three: multiple custom limb actions associated with speech phrases.
In this scenario, the first speech phrase is related to the wake-up instruction but does not indicate the operation of the electronic device after waking up. In this case, after the electronic device recognizes the first limb action and enters the awake state, it recognizes a second limb action of the user and determines the second speech phrase according to the second limb action, where the second speech phrase is used to supplement the first speech phrase. The electronic device then synthesizes a third speech phrase from the first and second speech phrases, and after waking up controls the electronic device using the operation instruction related to the third speech phrase.
In this scenario, a group of limb actions with a certain order is preset, and each action in the group corresponds to one speech phrase or one group of speech phrases. In this way, after the user makes a series of limb actions in sequence within the capture range of the camera of the electronic device, the electronic device maps the limb actions to a complete speech phrase according to their order and executes the instruction corresponding to that complete phrase. That is, the user makes a plurality of limb actions in front of the camera-equipped electronic device, different limb actions correspond to different speech phrases the user entered in advance, the phrases are combined into a new speech phrase according to the order of the user's limb actions, and the electronic device executes the control instruction corresponding to the new phrase.
Fig. 9 is a schematic diagram of the interaction between a user and an electronic device in the electronic device control method according to an embodiment of the present application. Assume the user has predefined the first speech phrase corresponding to a raised-little-finger gesture as "play" and the second speech phrase corresponding to a victory gesture as "the nursery rhyme Little White Rabbit". In the subsequent interaction between the user and the electronic device, referring to fig. 9, the user raises a little finger in front of the camera of the electronic device, so that the electronic device automatically wakes up and automatically generates and displays the text "play". Then the user makes a victory gesture in front of the camera, and the electronic device obtains the second speech phrase "the nursery rhyme Little White Rabbit" from that gesture. The electronic device then combines the first speech phrase "play" and the second speech phrase into "play the nursery rhyme Little White Rabbit", converts this phrase into a control instruction it can recognize, and searches for the nursery rhyme locally or through the network and plays it.
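Scenario three can be sketched as joining per-gesture phrase fragments in the order the gestures were made. The gesture labels and the phrase table below are illustrative assumptions:

```python
# Sketch of scenario three: each gesture maps to a phrase fragment, and the
# fragments are joined in the order in which the gestures were made.
GESTURE_PHRASES = {
    "little_finger": "play",
    "victory": "the nursery rhyme Little White Rabbit",
}


def phrase_from_gestures(gestures: list[str]) -> str:
    """Combine per-gesture phrases into one command, preserving gesture order."""
    return " ".join(GESTURE_PHRASES[g] for g in gestures)
```

Order matters: the same two gestures made in the opposite order would produce a different, likely invalid, command, which is why the embodiment stresses the sequence of the limb actions.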
With this scheme, the user controls the electronic device through a combination of multiple limb actions, which offers high flexibility.
Fig. 10 is a schematic structural diagram of an electronic device control apparatus according to an embodiment of the present application. As shown in fig. 10, the electronic device control apparatus 100 includes:
an identification module 11, configured to identify a first limb action of a user;
a judging module 12, configured to judge whether the first limb action is the limb action corresponding to a first speech phrase, where the first speech phrase is related to a wake-up instruction;
and a wake-up module 13, configured to control the electronic device to enter an awake state if the judging module 12 judges that the first limb action is the limb action corresponding to the first speech phrase.
Fig. 11 is a schematic structural diagram of another electronic device control apparatus according to an embodiment of the present application. Based on the foregoing fig. 10, the apparatus provided in this embodiment further includes:
a control module 14, configured to execute the operation instruction associated with the first speech phrase after the wake-up module 13 controls the electronic device to enter the awake state, if the first speech phrase is further associated with an operation instruction.
In a possible design, the control module 14 is configured to: after the judging module 12 judges that the first limb action is the limb action corresponding to the first speech phrase and the wake-up module 13 controls the electronic device to enter the awake state, obtain a second speech phrase, where the second speech phrase is used to supplement the first speech phrase; synthesize a third speech phrase from the first and second speech phrases, where the third speech phrase is related to an operation instruction; and control the electronic device using the operation instruction related to the third speech phrase.
In one possible design, when determining the second speech phrase, the control module 14 is configured to identify a second limb action of the user and determine the second speech phrase according to the second limb action.
In one possible design, when determining the second speech phrase, the control module 14 is configured to collect a voice signal sent by the user and determine the second speech phrase according to the voice signal.
Referring to fig. 11 again, the apparatus further includes a display module 15 and a setting module 16. The display module 15 is configured to display, before the identification module 11 identifies the first limb action of the user, a setting interface for setting the correspondence between limb actions and speech phrases. A text box is displayed on the setting interface for the user to customize a speech phrase, and at least one of the following buttons is also displayed on the setting interface: a system limb action selection button for instructing the user to select a limb action preset by the system; a custom limb action selection button for instructing the user to select a pre-stored limb action; and a custom limb action shooting button for instructing the user to start the camera of the electronic device and shoot a limb action.
The setting module 16 is configured to store the correspondence between the limb action and the speech phrase of the user according to the user's operation on the setting interface.
In one possible design, there are multiple text boxes, and different text boxes among them display different speech phrases. After the display module 15 displays the setting interface for setting the correspondence between limb actions and speech phrases, the setting module 16 identifies a touch instruction in which the user drags any one of the text boxes and adjusts the order of that text box among the text boxes according to the touch instruction.
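The drag-to-reorder behaviour can be sketched as moving one phrase within the ordered list that the setting interface maintains. A minimal illustration; the function name is an assumption:

```python
# Sketch of reordering: dragging a text box from position src to position dst
# changes the order in which its phrase is spliced into the final command.
def move_text_box(phrases: list[str], src: int, dst: int) -> list[str]:
    """Return a new ordering with the phrase at index src moved to index dst."""
    out = phrases.copy()  # leave the stored ordering untouched until saved
    out.insert(dst, out.pop(src))
    return out
```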
The apparatus provided by the embodiment of the present application can be used to perform the method executed by the electronic device in the above embodiments; its implementation principle and technical effect are similar and are not repeated here.
According to an embodiment of the present application, the present application also provides an electronic device and a readable storage medium.
Fig. 12 is a block diagram of an electronic device for implementing the electronic device control method of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 12, the electronic device includes: one or more processors 21, memory 22, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). In fig. 12, one processor 21 is taken as an example.
The memory 22 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the electronic device control method provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the electronic device control method provided by the present application.
The memory 22 is used as a non-transitory computer readable storage medium, and may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the identification module 11, the judgment module 12, the wake-up module 13, the control module 14, the display module 15, or the setting module 16 shown in fig. 10 or fig. 11) corresponding to the electronic device control method in the embodiment of the present application. The processor 21 executes various functional applications of the server and data processing, i.e., implements the electronic device control method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 22.
The memory 22 may include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for functions, and the storage data area may store data created according to the use of the electronic device, and the like. In addition, the memory 22 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the electronic device control method may further include: an input device 23 and an output device 24. The processor 21, the memory 22, the input device 23, and the output device 24 may be connected by a bus or otherwise; connection by a bus is taken as an example in fig. 12.
The input device 23 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device, such as a touch screen, keypad, mouse, trackpad, touchpad, pointing stick, one or more mouse buttons, trackball, joystick, and the like. The output device 24 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special purpose or general purpose, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The embodiment of the present application further provides an electronic device control method: the electronic device identifies a first limb action of the user, determines the first speech phrase corresponding to the first limb action, determines the control instruction corresponding to the first speech phrase, and controls the electronic device using the control instruction.
According to the technical scheme of the embodiment of the present application, after the electronic device detects that the user has entered the recognition range of the camera, it identifies the first limb action of the user and judges whether the first limb action is the limb action corresponding to the first speech phrase, where the first speech phrase is related to the wake-up instruction. If the electronic device judges that the first limb action is the limb action corresponding to the first speech phrase related to the wake-up instruction, the electronic device enters the awake state. In this process, the user can wake up the electronic device through a limb action without controlling it by voice, which achieves the purpose of waking up the electronic device both when it is inconvenient for the user to speak and in a noisy environment.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. An electronic device control method, characterized by comprising:
identifying a first limb action of the user;
judging whether the first limb action is the limb action corresponding to a first speech phrase, wherein the first speech phrase is related to a wake-up instruction and does not indicate the operation of the electronic device after waking up;
if the first limb action is the limb action corresponding to the first speech phrase, controlling the electronic device to enter an awake state;
identifying a second limb action of the user;
determining a second speech phrase according to the second limb action, the second speech phrase being used to supplement the first speech phrase;
combining the first speech phrase and the second speech phrase into a third speech phrase according to the order of the limb actions of the user;
converting the third speech phrase into an operation instruction that the electronic device can recognize;
and controlling the electronic device using the operation instruction.
2. The method of claim 1, wherein, prior to the identifying of the first limb action of the user, the method further comprises:
and acquiring the first conversation customized by the user.
3. The method of claim 2, wherein the acquiring of the user-customized first speech phrase comprises:
displaying a setting interface for setting the correspondence between limb actions and speech phrases, wherein a text box is displayed on the setting interface for the user to customize a speech phrase;
wherein at least one of the following buttons is also displayed on the setting interface: a system limb action selection button for instructing the user to select a limb action preset by the system; a custom limb action selection button for instructing the user to select a pre-stored limb action; and a custom limb action shooting button for instructing the user to start the camera of the electronic device and shoot a limb action; and wherein, before the identifying of the first limb action of the user, the method further comprises:
storing the correspondence between the limb action and the speech phrase of the user according to the user's operation on the setting interface.
4. The method of claim 3, wherein there are multiple text boxes, different text boxes among the multiple text boxes being used to display different speech phrases; and wherein, after the displaying of the setting interface for setting the correspondence between limb actions and speech phrases, the method further comprises:
identifying a touch instruction in which the user drags any one of the multiple text boxes;
and adjusting the order of the dragged text box among the multiple text boxes according to the touch instruction.
5. An electronic device control apparatus, comprising:
an identification module, configured to identify a first limb action of a user;
a judging module, configured to judge whether the first limb action is the limb action corresponding to a first speech phrase, wherein the first speech phrase is related to a wake-up instruction and does not indicate the operation of the electronic device after waking up;
a wake-up module, configured to control the electronic device to enter an awake state if the judging module judges that the first limb action is the limb action corresponding to the first speech phrase;
and a control module, configured to perform:
identifying a second limb action of the user;
determining a second speech phrase according to the second limb action, the second speech phrase being used to supplement the first speech phrase;
combining the first speech phrase and the second speech phrase into a third speech phrase according to the order of the limb actions of the user;
converting the third speech phrase into an operation instruction that the electronic device can recognize;
and controlling the electronic device using the operation instruction.
6. The apparatus as recited in claim 5, further comprising: a setting module, configured to acquire the first speech phrase customized by the user before the first limb action of the user is identified.
7. The apparatus as recited in claim 6, further comprising: a display module, configured to display, before the identification module identifies the first limb action of the user, a setting interface for setting the correspondence between limb actions and speech phrases, wherein a text box is displayed on the setting interface for the user to customize a speech phrase, and at least one of the following buttons is also displayed on the setting interface: a system limb action selection button for instructing the user to select a limb action preset by the system; a custom limb action selection button for instructing the user to select a pre-stored limb action; and a custom limb action shooting button for instructing the user to start the camera of the electronic device and shoot a limb action;
the setting module being further configured to store the correspondence between the limb action and the speech phrase of the user according to the user's operation on the setting interface.
8. The apparatus of claim 7, wherein there are multiple text boxes, different text boxes among the multiple text boxes being used to display different speech phrases; and wherein the setting module is configured to identify, after the display module displays the setting interface for setting the correspondence between limb actions and speech phrases, a touch instruction in which the user drags any one of the multiple text boxes, and to adjust the order of the dragged text box among the multiple text boxes according to the touch instruction.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-4.
CN202010014503.XA 2020-01-07 2020-01-07 Electronic equipment control method and device Active CN111160318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014503.XA CN111160318B (en) 2020-01-07 2020-01-07 Electronic equipment control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014503.XA CN111160318B (en) 2020-01-07 2020-01-07 Electronic equipment control method and device

Publications (2)

Publication Number Publication Date
CN111160318A CN111160318A (en) 2020-05-15
CN111160318B true CN111160318B (en) 2023-10-31

Family

ID=70561771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014503.XA Active CN111160318B (en) 2020-01-07 2020-01-07 Electronic equipment control method and device

Country Status (1)

Country Link
CN (1) CN111160318B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966568A (en) * 2020-09-22 2020-11-20 北京百度网讯科技有限公司 Prompting method and device and electronic equipment

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211001A (en) * 2019-05-17 2019-09-06 深圳追一科技有限公司 Hotel assistant customer service system, data processing method and related device

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100091863A (en) * 2009-02-10 2010-08-19 김용기 Home network system and intelligent home server control action method and tcp/ip using picture verification control and monitor method
CN202907113U (en) * 2012-06-14 2013-04-24 深圳市同洲电子股份有限公司 A TV set controlled by gesture identification
CN103092363A (en) * 2013-01-28 2013-05-08 上海斐讯数据通信技术有限公司 Mobile terminal with gesture input function and mobile terminal gesture input method
CN104424947A (en) * 2013-08-31 2015-03-18 上海能感物联网有限公司 Method for controlling household appliances in centralized way through foreign language voice of natural person
CN104503254A (en) * 2014-12-22 2015-04-08 珠海格力电器股份有限公司 User-defined control method and system for household appliances
CN106339067A (en) * 2015-07-06 2017-01-18 联想(北京)有限公司 Control method and electronic equipment
WO2017041360A1 (en) * 2015-09-09 2017-03-16 北京百度网讯科技有限公司 Smart home device control method, apparatus and device, and non-volatile computer storage medium
CN106527951A (en) * 2016-11-16 2017-03-22 珠海格力电器股份有限公司 Method and device for playing designated audio file and electronic equipment
CN107688329A (en) * 2017-08-21 2018-02-13 杭州古北电子科技有限公司 Smart home control method and smart home control system
KR20190084202A (en) * 2017-12-18 2019-07-16 네이버 주식회사 Method and system for controlling an artificial intelligence device using a plurality of wake-up words
CN108271078A (en) * 2018-03-07 2018-07-10 康佳集团股份有限公司 Voice wake-up method through gesture recognition, smart television, and storage medium
CN108921101A (en) * 2018-07-04 2018-11-30 百度在线网络技术(北京)有限公司 Control instruction processing method based on gesture recognition, device, and readable storage medium
CN109283999A (en) * 2018-07-26 2019-01-29 杭州懒陈鑫网络科技有限公司 Gesture interaction method and interaction system
CN109727596A (en) * 2019-01-04 2019-05-07 北京市第一〇一中学 Method for controlling a remote controller, and remote controller
CN110647274A (en) * 2019-08-15 2020-01-03 华为技术有限公司 Interface display method and equipment
KR20190106921A (en) * 2019-08-30 2019-09-18 엘지전자 주식회사 Communication robot and method for operating the same
CN110597959A (en) * 2019-09-17 2019-12-20 北京百度网讯科技有限公司 Text information extraction method and device and electronic equipment
CN110600016A (en) * 2019-09-20 2019-12-20 北京市律典通科技有限公司 File pushing method and device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
AI Vision: Smart speaker design and implementation with object detection custom skill and advanced voice interaction capability; B. Sudharsan et al.; 2019 11th International Conference on Advanced Computing (ICoAC); 97-102 *
Home Appliances Control Based on Hand Motion Gesture; D. Chaitanya et al.; International Journal of Emerging Technologies in Engineering Research (IJETER); Vol. 5, No. 11; 110-114 *
Research on Active Interaction Design for Smart Speakers Agent of Home Service Robot; Qin, J. et al.; Design, User Experience, and Usability: User Experience in Advanced Technological Environments, HCII 2019, Lecture Notes in Computer Science; Vol. 11584; 253-263 *
Design and Implementation of a Welcome-Robot Software System; Huang Guoyan; China Master's Theses Full-text Database: Information Science and Technology; No. 12; 1-65 *
Design of a Smart Device Control System Based on Gesture Recognition; Deng Qiaoyin; Computing Technology and Automation; Vol. 36, No. 2; 63-67 *
An Augmented Reality Human-Computer Interaction System Combining Speech and Gesture Recognition; Zhao Tianchi et al.; Journal of Hebei Engineering and Technical College; No. 3; 42-46 *

Also Published As

Publication number Publication date
CN111160318A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111261159B (en) Information indication method and device
CN112669831B (en) Voice recognition control method and device, electronic equipment and readable storage medium
CN112581946B (en) Voice control method, voice control device, electronic equipment and readable storage medium
CN110620844B (en) Program starting method, device, equipment and storage medium
KR102685523B1 (en) The apparatus for processing user voice input
CN110557699B (en) Intelligent sound box interaction method, device, equipment and storage medium
US11972761B2 (en) Electronic device for sharing user-specific voice command and method for controlling same
CN108476339B (en) Remote control method and terminal
CN112825013A (en) Control method and device of terminal equipment
CN110768877B (en) Voice control instruction processing method and device, electronic equipment and readable storage medium
JP7204804B2 (en) Smart rearview mirror interaction method, device, electronic device and storage medium
CN110601933A (en) Control method, device and equipment of Internet of things equipment and storage medium
WO2021051588A1 (en) Data processing method and apparatus, and apparatus used for data processing
KR20210033873A (en) Speech recognition control method, apparatus, electronic device and readable storage medium
KR20200045851A (en) Electronic Device and System which provides Service based on Voice recognition
CN111160318B (en) Electronic equipment control method and device
KR20210116897A (en) Method for contolling external device based on voice and electronic device thereof
CN111736799A (en) Voice interaction method, device, equipment and medium based on man-machine interaction
KR20210038278A (en) Speech control method and apparatus, electronic device, and readable storage medium
CN110908638A (en) Operation flow creating method and electronic equipment
CN112466300B (en) Interaction method, electronic device, intelligent device and readable storage medium
CN111901482B (en) Function control method and device, electronic equipment and readable storage medium
CN111243585B (en) Control method, device and equipment under multi-user scene and storage medium
CN111327756A (en) Operation guiding method of terminal and terminal
CN107293298B (en) Voice control system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210519

Address after: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant after: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

Applicant after: Shanghai Xiaodu Technology Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

GR01 Patent grant