WO2021244059A1 - Interaction method and device, earphone, and server - Google Patents

Interaction method and device, earphone, and server

Info

Publication number
WO2021244059A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
recognition result
user
voice recognition
voice
Prior art date
Application number
PCT/CN2021/074916
Other languages
French (fr)
Chinese (zh)
Inventor
崔文华 (Cui Wenhua)
赵楠 (Zhao Nan)
Original Assignee
北京搜狗智能科技有限公司 (Beijing Sogou Intelligent Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京搜狗智能科技有限公司 (Beijing Sogou Intelligent Technology Co., Ltd.)
Publication of WO2021244059A1 publication Critical patent/WO2021244059A1/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/953 - Querying, e.g. by the use of web search engines
    • G06F 16/9535 - Search customisation based on user profiles and personalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 - Details of transducers, loudspeakers or microphones
    • H04R 1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R 1/1091 - Details not provided for in groups H04R 1/1008 - H04R 1/1083
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command

Definitions

  • the present invention relates to the technical field of electronic equipment, in particular to an interactive method, an interactive device, a headset and a server.
  • the embodiment of the present invention provides an interaction method, an interaction device, a headset, and a server.
  • the embodiment of the present invention discloses an interaction method, which is applied to a headset, the headset is communicatively connected with a server, the headset has an interactive assistant, and the method includes:
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the headset has a gravity sensor
  • the acquiring the user status includes:
  • the invoking the interactive assistant to recommend songs according to the user status includes:
  • the recommended song is a song searched by the server that matches the user's status.
  • the invoking the interactive assistant to play the song according to the user status includes:
  • the preset song with the adjusted sound effect is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes:
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes:
  • the interactive assistant is called to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
  • it also includes:
  • it also includes:
  • it also includes:
  • it also includes:
  • when the trigger condition of the preset reminder event is met, the interactive assistant is called to obtain and play the memo information corresponding to the preset reminder event.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • it also includes:
  • label information is generated for the memo information.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • the interactive assistant is called to find and play the preset memo information that matches the target tag information.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence.
  • the headset has an orientation sensor, and the acquiring user orientation information includes:
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user orientation information, the dialogue sentence and the user's geographic location information.
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence includes:
  • the navigation query information includes the user orientation information and the dialogue sentence;
  • the reply sentence for voice navigation sent by the server is received and played.
  • the reply sentence for voice navigation is generated by the server by querying according to the user orientation information and the dialogue sentence.
  • the embodiment of the present invention discloses an interaction method, which is applied to a server, the server is in communication connection with a headset, the headset has an interactive assistant, and the method includes:
  • the server receives the user voice sent by the headset, and recognizes the user voice to obtain a voice recognition result
  • the voice recognition result is sent to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
  • it also includes:
  • the headset is used to call the interactive assistant to recommend the recommended song to the user.
  • it also includes:
  • the preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
  • it also includes:
  • the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • the recognizing and recording information from the user voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
  • it also includes:
  • it also includes:
  • the reminder event is sent to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  • it also includes:
  • label information is generated for the memo information.
  • the acquiring preset memo information according to the voice recognition result and sending it to the headset includes:
  • when the voice recognition result indicates a need to find memo information with target tag information, the preset memo information that matches the target tag information is searched for and sent to the headset.
  • it also includes:
  • a reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
  • the generating and sending a reply sentence matching the conversation sentence to the headset includes:
  • a reply sentence for voice navigation is generated and sent to the headset.
  • the embodiment of the present invention discloses an interactive device, which is applied to a headset, the headset is in communication connection with a server, the headset has an interactive assistant, and the device includes:
  • a voice recognition result obtaining module configured to send a user voice to the server, and obtain a voice recognition result of the user voice from the server;
  • the first interaction module is used to invoke the interaction assistant to perform an interaction operation according to the voice recognition result.
  • the first interaction module includes:
  • the wake-up module is used to wake up the interactive assistant according to the voice recognition result
  • the user status acquisition module is used to acquire the user status
  • the song interaction module is used to call the interactive assistant to recommend songs or play songs according to the user status.
  • the headset has a gravity sensor
  • the user status acquisition module is configured to acquire sensor data detected by the gravity sensor, and determine the user status according to the sensor data.
  • the song interaction module is configured to send the user status to the server, and to receive the recommended song sent by the server and call the interactive assistant to recommend it to the user; the recommended song is a song found by the server that matches the user's status.
  • the song interaction module is configured to send the user status to the server, and to receive the preset song with the adjusted sound effect sent by the server and call the interactive assistant to play it; the preset song with the adjusted sound effect is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
  • the first interaction module includes:
  • the first recording interaction module is configured to call the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the first recording interaction module is configured to call the interactive assistant to recognize and record memo information from the voice recognition result according to the voice recognition result, or to obtain and play preset memo information according to the voice recognition result.
  • the first recording interaction module is configured to call the interactive assistant to recognize and record a target voice from the user voice according to the voice recognition result, or obtain the recorded target voice according to the voice recognition result And play.
  • it also includes:
  • the first recorded information transmission module is configured to send recorded information to the server, and/or to obtain and record the information recorded by the server.
  • it also includes:
  • the first reminder event generating module is configured to generate a reminder event for the memo information after the memo information is recorded.
  • it also includes:
  • the first reminder event obtaining module is configured to obtain preset reminder events for the memo information from the server.
  • it also includes:
  • the first reminder event triggering module is configured to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  • the first recording interaction module is configured to search for information that matches the voice recognition result from preset memo information, and to call the interactive assistant to play the information that matches the voice recognition result.
  • it also includes:
  • the first semantic analysis module is configured to obtain a semantic analysis result obtained by the server performing semantic analysis on the memo information
  • the first label generating module is configured to generate label information for the memo information according to the semantic analysis result.
  • the first recording interaction module is configured to, when the voice recognition result includes a requirement to find memo information with target tag information, call the interactive assistant to find and play the preset memo information that matches the target tag information.
  • the first interaction module includes:
  • the first dialogue sentence obtaining module is used to obtain the dialogue sentence from the speech recognition result
  • the first dialogue interaction module is used to call the interactive assistant to generate and play a reply sentence matching the dialogue sentence.
  • the first dialogue interaction module is configured to obtain user orientation information, and to call the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence.
  • the headset has an orientation sensor
  • the first dialogue interaction module is configured to acquire user orientation information detected by the orientation sensor.
  • the first dialogue interaction module is configured to obtain user geographic location information, and to call the interactive assistant to generate and play a reply sentence for voice navigation based on the user orientation information, the dialogue sentence, and the user geographic location information.
  • the first dialogue interaction module is configured to send navigation query information to the server, the navigation query information including the user orientation information and the dialogue sentence, and to receive and play the reply sentence for voice navigation sent by the server; the reply sentence for voice navigation is generated by the server by querying according to the user orientation information and the dialogue sentence.
  • the embodiment of the present invention discloses an interactive device, which is applied to a server, the server is in communication connection with a headset, the headset has an interactive assistant, and the device includes:
  • the voice recognition module is used to receive the user voice sent by the headset and recognize the user voice to obtain a voice recognition result
  • the voice recognition result sending module is configured to send the voice recognition result to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
  • it also includes:
  • the first user status acquiring module is configured to acquire the user status detected by the headset
  • the first song sending module is used to find a recommended song matching the user's status and send it to the earphone; the earphone is used to call the interactive assistant to recommend the recommended song to the user.
  • it also includes:
  • the second user status acquiring module is configured to acquire the user status detected by the headset
  • a sound effect determining module configured to determine a sound effect matching the user status
  • the second song sending module is configured to adjust a preset song to the sound effect and send the preset song with the adjusted sound effect to the earphone.
  • the earphone is used to call the interactive assistant to play the preset song with the adjusted sound effect.
  • it also includes:
  • the recorded information processing module is used for recognizing and recording information from the user's voice according to the voice recognition result, or acquiring the recorded information according to the voice recognition result and sending it to the earphone, and the earphone is used for playing the recorded information.
  • the record information processing module includes:
  • the memo information processing module is configured to recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain preset memo information according to the voice recognition result and send it to the headset.
  • the record information processing module includes:
  • the voice processing module is configured to recognize and record a target voice from the user voice according to the voice recognition result, or obtain a recorded target voice according to the voice recognition result and send it to the headset.
  • it also includes:
  • the second record information transmission module is configured to send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
  • it also includes:
  • the second reminder event generating module is used to generate a reminder event for the memo information after the memo information is recorded;
  • the reminder event sending module is configured to send the reminder event to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  • it also includes:
  • the second semantic analysis module is used to perform semantic analysis on the memo information to obtain the semantic analysis result
  • the second label generating module is used to generate label information for the memo information according to the semantic analysis result.
  • the memo information processing module is configured to, when the voice recognition result indicates a need to find memo information with target tag information, search for the preset memo information that matches the target tag information and send it to the headset.
  • it also includes:
  • a dialogue sentence obtaining module which is used to obtain a dialogue sentence from the speech recognition result
  • the reply sentence sending module is used for generating a reply sentence matching the dialogue sentence and sending it to the earphone, and the earphone is used for playing the reply sentence matching the dialogue sentence.
  • the reply sentence sending module is configured to obtain the user orientation information detected by the headset, and to generate a reply sentence for voice navigation according to the user orientation information and the dialogue sentence and send it to the headset.
  • the embodiment of the present invention discloses a headset, including a memory, and one or more programs.
  • One or more programs are stored in the memory and configured to be executed by one or more processors.
  • the above program contains instructions for the following operations:
  • the interactive assistant is invoked to perform interactive operations according to the voice recognition result.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the headset has a gravity sensor
  • the acquiring the user status includes:
  • the invoking the interactive assistant to recommend songs according to the user status includes:
  • the recommended song is a song searched by the server that matches the user's status.
  • the invoking the interactive assistant to play the song according to the user status includes:
  • the preset song with the adjusted sound effect is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes:
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes:
  • the interactive assistant is called to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
  • when the trigger condition of the preset reminder event is met, the interactive assistant is called to obtain and play the memo information corresponding to the preset reminder event.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • label information is generated for the memo information.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • the interactive assistant is called to find and play the preset memo information that matches the target tag information.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence.
  • the headset has an orientation sensor, and the acquiring user orientation information includes:
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user orientation information, the dialogue sentence and the user's geographic location information.
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence includes:
  • the navigation query information includes the user orientation information and the dialogue sentence;
  • the reply sentence for voice navigation sent by the server is received and played.
  • the reply sentence for voice navigation is generated by the server by querying according to the user orientation information and the dialogue sentence.
  • the embodiment of the present invention discloses a server, including a memory, and one or more programs.
  • One or more programs are stored in the memory and configured to be executed by one or more processors.
  • the above program contains instructions for the following operations:
  • the voice recognition result is sent to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
  • the headset is used to call the interactive assistant to recommend the recommended song to the user.
  • the preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
  • the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
  • the reminder event is sent to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  • label information is generated for the memo information.
  • the acquiring preset memo information according to the voice recognition result and sending it to the headset includes:
  • when the voice recognition result indicates a need to find memo information with target tag information, the preset memo information that matches the target tag information is searched for and sent to the headset.
  • a reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
  • the generating and sending a reply sentence matching the conversation sentence to the headset includes:
  • a reply sentence for voice navigation is generated and sent to the headset.
  • the embodiment of the present invention discloses a computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the interaction method described in any one of the above are implemented.
  • the embodiment of the present invention also discloses a computer program, including computer readable code, which when the computer readable code runs on a computing processing device, causes the computing processing device to execute the interaction method according to any one of the foregoing.
  • the headset can obtain the voice recognition result of the user's voice from the server, and the interactive assistant of the headset can perform interactive operations according to the voice recognition result of the user's voice.
  • the user does not need to operate the headset by hand, and the multiple interactive functions of the headset are realized.
  • FIG. 1 is a flowchart of the steps of Embodiment 1 of an interactive method of the present invention
  • FIG. 2 is a flowchart of the steps of Embodiment 2 of an interactive method of the present invention
  • FIG. 3 is a flowchart of the steps of Embodiment 3 of an interactive method of the present invention.
  • FIG. 4 is a flowchart of the steps of Embodiment 4 of an interactive method of the present invention.
  • FIG. 5 is a flowchart of steps in Embodiment 5 of an interactive method of the present invention.
  • FIG. 6 is a structural block diagram of Embodiment 1 of an interactive device of the present invention.
  • FIG. 7 is a structural block diagram of Embodiment 2 of an interactive device of the present invention.
  • FIG. 8 is a structural block diagram of Embodiment 3 of an interactive device of the present invention.
  • FIG. 9 is a structural block diagram of Embodiment 4 of an interactive device of the present invention.
  • FIG. 10 is a structural block diagram of a headset for interaction according to an exemplary embodiment.
  • FIG. 11 is a schematic structural diagram of a server for interaction shown in another exemplary embodiment.
  • Referring to FIG. 1, there is shown a flowchart of the steps of Embodiment 1 of an interactive method of the present invention.
  • the method is applied to a headset, the headset is in communication with a server, and the headset has an interactive assistant.
  • the method may specifically include the following step:
  • Step 101 The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
  • Earphones are portable electronic devices that people often use in daily life.
  • the earphones can have playback functions, sound pickup functions, and communication functions. Users can use headphones to listen to songs or communicate on the phone.
  • the server has a voice recognition function, which can perform voice recognition on the user's voice collected by the headset.
  • Step 102 Invoke the interactive assistant to perform an interactive operation according to the voice recognition result.
  • the headset is installed with an interactive assistant.
  • the interactive assistant can be a program installed in the headset to run independently, and can provide a variety of interactive functions.
  • the interactive assistant can perform interactive operations based on the voice recognition results to implement various interactive functions of the headset.
  • the headset can be communicatively connected with a mobile terminal, the mobile terminal can install an application (APP) matched with the interactive assistant of the headset, and the user can control the interactive assistant on the interface of the APP.
  • the interactive assistant can be awakened in a specific way, for example, a specific voice command. Some interactive functions of the interactive assistant can be executed after being awakened, and some interactive functions can be executed without being awakened.
  • the headset can obtain the voice recognition result of the user's voice from the server, and the interactive assistant of the headset can perform interactive operations according to the voice recognition result of the user's voice.
  • the user does not need to operate the headset by hand, and the multiple interactive functions of the headset are realized.
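  • For illustration only, the following Python sketch shows one way the headset-side flow of Embodiment 1 could be arranged: the user voice is sent to the server, the voice recognition result is returned, and the interactive assistant is dispatched according to that result. The HTTP endpoint, JSON response format and dispatch keywords are assumptions of this sketch and are not specified in the present disclosure.

```python
# Hedged sketch of the headset-side flow of Embodiment 1, assuming a
# hypothetical HTTP endpoint and JSON response format (neither is specified
# in this disclosure); the dispatch keywords are illustrative only.
import json
import urllib.request

SERVER_URL = "http://example.com/asr"  # hypothetical recognition endpoint

def recognize(audio_bytes: bytes) -> str:
    """Send the user voice to the server and return the voice recognition result."""
    req = urllib.request.Request(
        SERVER_URL, data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

def invoke_assistant(recognition_result: str) -> None:
    """Dispatch the recognition result to one of the interactive functions."""
    if "song" in recognition_result or "sound effect" in recognition_result:
        print("-> song recommendation / sound-effect interaction")
    elif recognition_result.lower().startswith("write down"):
        print("-> information recording interaction")
    else:
        print("-> question-and-answer interaction")
```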
  • the interactive function of the earphone may include a song recommendation function.
  • Referring to FIG. 2, there is shown a flowchart of the steps of Embodiment 2 of an interactive method of the present invention.
  • the method is applied to a headset, the headset is in communication with a server, and the headset has an interactive assistant.
  • the method may specifically include the following step:
  • Step 201 The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
  • Step 202 Wake up the interactive assistant according to the voice recognition result.
  • when the voice recognition result includes information indicating that the user needs to find a suitable song or adjust the sound effect of the song, the interactive assistant is awakened to recommend a song to the user.
  • for example, the speech recognition result includes “play a song suitable for running” or “switch to a sound effect suitable for running”.
  • Step 203 Obtain the user status.
  • the user status, that is, the state of the user, can include a sitting state, a walking state, a running state, a driving state, a riding state, and so on.
  • the headset may have a gravity sensor that can detect the state of the user.
  • the step of acquiring the user status may include: acquiring sensor data detected by the gravity sensor, and determining the user status according to the sensor data.
  • an algorithm for detecting the user's state can be used to determine the user's state based on the sensing data of the gravity sensor.
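  • As a minimal sketch of such an algorithm (the present disclosure does not specify one), the user status could be inferred from the variability of the gravity-sensor readings; the thresholds below are illustrative assumptions only.

```python
# Illustrative sketch only: inferring the user status from the variability of
# gravity-sensor (accelerometer) magnitudes. The thresholds are assumptions;
# the disclosure does not specify the detection algorithm.
import statistics

def detect_user_status(accel_magnitudes: list[float]) -> str:
    """accel_magnitudes: acceleration magnitude (m/s^2) per sample."""
    variability = statistics.pstdev(accel_magnitudes)
    if variability < 0.3:
        return "sitting"
    if variability < 1.5:
        return "walking"
    return "running"

print(detect_user_status([9.80, 9.81, 9.79, 9.80]))  # low variability -> sitting
print(detect_user_status([7.5, 12.0, 8.0, 13.5]))    # high variability -> running
```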
  • Step 204 Invoke the interactive assistant to recommend songs or play songs according to the user status.
  • the interactive assistant can recommend songs to the user according to the user's status, and the user can determine whether to play the song; it can also directly play the song that adapts to the user's status.
  • the interactive assistant can search for songs with tags that match the user's status. For example, when the user's status is “running”, songs with the tag "strong rhythm” can be recommended.
  • the interactive assistant may be called to find recommended songs that match the user's status and recommend them to the user.
  • the server can look up recommended songs.
  • the step of invoking the interactive assistant to recommend songs based on the user status may include: sending the user status to the server; and receiving the recommended song sent by the server and invoking the interactive assistant to recommend it to the user, the recommended song being a song found by the server that matches the state of the user.
  • the interactive assistant can play the song according to the matched sound effect according to the user's status. Specifically, the interactive assistant can adjust the sound effect of the song through the sound effect algorithm.
  • the sound effect algorithm can adjust the song into multiple types of sound effects, such as "quiet”, “far away”, “rock” and so on. When the user's state is sitting still, the preset song can be adjusted to a "quiet” sound effect.
  • the interactive assistant may adjust the sound effect of the song
  • the step of invoking the interactive assistant to recommend a song according to the user state may include: invoking the interactive assistant to determine the sound effect matching the user state and adjusting a preset song to the sound effect; and playing the preset song with the adjusted sound effect.
  • the server may adjust the sound effects of the songs
  • the step of invoking the interactive assistant to recommend songs according to the user status may include: sending the user status to the server; and receiving the preset song with the adjusted sound effect sent by the server and calling the interactive assistant to play it; the preset song with the adjusted sound effect is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
  • the preset song may be the currently playing song in the playlist of the headset.
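  • The mapping from user status to sound effect could, for example, be realized with a simple lookup table applied to the currently playing preset song; the mapping below is an assumption made for illustration, reusing the preset names mentioned above.

```python
# Hedged sketch: a lookup table mapping user status to a sound-effect preset,
# applied to the currently playing preset song. The preset names follow the
# examples above; the mapping itself is an assumption.
SOUND_EFFECTS = {
    "sitting": "quiet",
    "walking": "far away",
    "running": "rock",
}

def adjust_preset_song(song: dict, user_status: str) -> dict:
    """Return a copy of the song tagged with the sound effect matching the status."""
    effect = SOUND_EFFECTS.get(user_status, "default")
    return {**song, "sound_effect": effect}

current = {"title": "currently playing song", "sound_effect": "default"}
print(adjust_preset_song(current, "running"))  # sound_effect becomes "rock"
```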
  • the earphone can obtain the voice recognition result of the user's voice from the server, wake up the interactive assistant according to the voice recognition result and obtain the user status, and call the interactive assistant to recommend or play songs according to the user status.
  • the embodiment of the present invention enables the earphone to recommend or play songs without the user operating the earphone by hand, which simplifies the user's operation process.
  • the interactive function of the headset may include an information recording interactive function.
  • Referring to FIG. 3, there is shown a flowchart of the steps of Embodiment 3 of an interactive method of the present invention. The method is applied to a headset, the headset is in communication with a server, and the headset has an interactive assistant.
  • the method may specifically include the following step:
  • Step 301 The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
  • Step 302 Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the headset can send the recorded information to the server, and can also obtain and record the recorded information of the server.
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes: invoking The interactive assistant recognizes and records a target voice from the user's voice according to the voice recognition result, or obtains and plays the recorded target voice according to the voice recognition result.
  • the interactive assistant can recognize the target voice from the user's voice. For example, if the user says "record a sound", the interactive assistant can record the user's voice collected later.
  • the user can set the recording mode to filter the sounds that need to be recorded. For example, in a meeting, if the user wants to record the voice of every person participating in the meeting, the headset can record voice omnidirectionally. In a classroom, if the user wants to record the teacher's speech, the headset can record voice from a specified direction.
  • the interactive assistant can search for the target voice and play it.
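  • The omnidirectional and specified-direction recording modes described above could be sketched as a per-sample direction filter; the 30-degree window and the example angles are illustrative assumptions, not values given in this disclosure.

```python
# Sketch of the recording-mode filter (omnidirectional vs. specified direction).
# The +/-30 degree window and the example angles are illustrative assumptions.
def keep_sample(sample_direction_deg: float, mode: str,
                target_direction_deg: float = 0.0) -> bool:
    if mode == "omnidirectional":
        return True  # meeting scenario: record every participant
    # directional mode: keep only sound arriving near the target direction
    delta = abs((sample_direction_deg - target_direction_deg + 180) % 360 - 180)
    return delta <= 30.0

print(keep_sample(170.0, "omnidirectional"))       # True
print(keep_sample(170.0, "directional", 0.0))      # False: outside the window
print(keep_sample(20.0, "directional", 0.0))       # True: classroom scenario
```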
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes: invoking the interactive assistant to recognize and record memo information from the voice recognition result according to the voice recognition result, or to obtain and play preset memo information according to the voice recognition result.
  • the interactive assistant can extract relevant content from the voice recognition result and record it as memo information.
  • the interactive assistant supports speaking a command and the content to be recorded in one sentence.
  • the form of the voice command can be "write down + memo content”
  • for example, the voice recognition result is “Take it down for me, have a meeting with the sales team in the meeting room on the third floor at 10 o'clock tomorrow morning.”
  • “Take it down for me” indicates that memo information needs to be recorded, and the interactive assistant records “have a meeting with the sales team in the meeting room on the third floor at 10 o'clock tomorrow morning” as the memo information.
  • the memo information that needs to be recorded can also be prompted by phrases such as “Help me note down a few things”, “Help me keep an account”, “Help my parents keep a record”, “Help me remember a parking space”, and so on.
  • a voice recognition model can be trained in advance to identify the need to record memo information.
  • the interactive assistant can obtain the preset memo information and play it.
  • the interactive assistant may obtain the memo information locally from the headset, or obtain the memo information from the server.
  • the step of acquiring and playing the preset memo information according to the voice recognition result may include: searching for information matching the voice recognition result from the preset memo information; and invoking the interactive assistant to play the information that matches the voice recognition result.
  • the interactive assistant can retrieve specific information from the memo information, for example, retrieve information that matches the keyword, time, location, category, and other information in the user's voice. For example, the voice recognition result is "What time is the meeting with sales tomorrow morning?", and the interactive assistant answers "10 o'clock”.
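  • As an illustrative sketch (not the claimed method), the “write down + memo content” command form and the keyword-based lookup could be handled as follows; the trigger phrases follow the examples above, and the matching rule is deliberately simplistic.

```python
# Illustrative sketch (not the claimed method) of the "write down + memo content"
# command form and a simple keyword lookup over recorded memos.
MEMOS: list[str] = []
TRIGGERS = ("take it down for me", "write down")

def handle_recognition_result(text: str) -> str:
    lowered = text.lower()
    for trigger in TRIGGERS:
        if lowered.startswith(trigger):
            memo = text[len(trigger):].strip(" ,.:")
            MEMOS.append(memo)
            return f"recorded memo: {memo}"
    # otherwise treat the result as a query and return the best keyword match
    hits = [m for m in MEMOS if any(word in m.lower() for word in lowered.split())]
    return hits[0] if hits else "no matching memo"

print(handle_recognition_result(
    "Take it down for me, meet the sales team at 10 tomorrow in the third floor meeting room"))
print(handle_recognition_result("What time is the meeting with the sales team tomorrow?"))
```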
  • the earphone can perform semantic analysis on the memo information to obtain the semantic analysis result.
  • the headset can also obtain from the server the semantic analysis result obtained by the server performing semantic analysis on the memo information.
  • the headset can generate tag information for the memo information according to the semantic analysis result.
  • the server may use a natural language understanding algorithm to perform semantic analysis on the speech recognition result to obtain the semantic analysis result.
  • the step of acquiring and playing the preset memo information according to the voice recognition result may include: when the voice recognition result includes a requirement to find memo information with target tag information, calling the interactive assistant to find and play the preset memo information that matches the target tag information.
  • the label information may include information such as classification labels and attribute labels.
  • the interactive assistant can generate corresponding label information according to the semantic analysis result. Based on semantic analysis, the interactive assistant can categorize the memo or add tags to it. For example, after the user finishes saying “Take it down for me, I will have a meeting with the sales in the meeting room on the third floor at 10 am tomorrow”, semantic analysis determines that this memo information belongs to the to-do category. In addition to searching by keywords, users can also search by tag information; for example, if the user's voice is “What do I have to do tomorrow?”, the interactive assistant searches for memo information with the tag “to-do”.
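  • A hedged sketch of tag generation and tag-based lookup follows; a real system would rely on the server's natural-language-understanding result, for which a keyword rule stands in here purely as an assumption.

```python
# Hedged sketch of label (tag) generation and tag-based lookup. A real system
# would use the server's natural-language-understanding result; the keyword
# rule below stands in for that analysis purely as an assumption.
def generate_tags(memo: str) -> list[str]:
    tags = []
    if any(word in memo for word in ("meeting", "meet", "tomorrow")):
        tags.append("to-do")
    if any(word in memo for word in ("account", "paid", "spent")):
        tags.append("accounting")
    return tags or ["note"]

memo_store = {
    "meet the sales team at 10 tomorrow in the third floor meeting room": [],
}
for text in memo_store:
    memo_store[text] = generate_tags(text)

def find_by_tag(target_tag: str) -> list[str]:
    return [memo for memo, tags in memo_store.items() if target_tag in tags]

print(find_by_tag("to-do"))  # -> the meeting memo
```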
  • the interactive assistant may generate a reminder event for the memo information after recording the memo information; a reminder event generated by the server for the memo information may also be obtained from the server.
  • the reminder event may include reminder content and trigger conditions.
  • the reminder content is memo information
  • the trigger condition is a condition for triggering the reminder event, such as reaching a set time.
  • when the trigger condition of the preset reminder event is met, the interactive assistant may be called to obtain and play the memo information corresponding to the preset reminder event. For example, if the trigger condition of a reminder event is “the time reaches 9:45”, the headset plays the memo information corresponding to the reminder event and reminds the user by voice: “You have arranged a meeting with sales in the third floor meeting room at 10 o'clock, please arrange in advance”.
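  • A reminder event can thus be modeled as reminder content plus a trigger condition; the sketch below assumes a time-based trigger checked by polling, which is only one possible realization.

```python
# Sketch of a reminder event as reminder content plus a trigger condition
# ("the time reaches 9:45" in the example). Polling the trigger is an
# assumption made for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReminderEvent:
    memo: str
    trigger_time: datetime
    fired: bool = False

def check_reminders(events: list[ReminderEvent], now: datetime) -> None:
    for event in events:
        if not event.fired and now >= event.trigger_time:
            event.fired = True
            print(f"Playing memo: {event.memo}")

events = [ReminderEvent(
    "Meeting with sales in the third floor meeting room at 10 o'clock",
    datetime(2021, 2, 2, 9, 45))]
check_reminders(events, datetime(2021, 2, 2, 9, 46))  # trigger met -> memo played
```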
  • the headset can obtain the voice recognition result of the user's voice from the server; call the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the embodiment of the present invention enables the user to record information or play recorded information through the earphone without operating the earphone by hand, which simplifies the user's operation process.
  • the interactive function of the headset may include a question-and-answer interactive function.
  • Referring to FIG. 4, there is shown a flowchart of the steps of Embodiment 4 of an interactive method of the present invention.
  • the method is applied to a headset, the headset is in communication with a server, and the headset has an interactive assistant.
  • the method may specifically include the following step:
  • Step 401 The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
  • Step 402 Obtain a dialogue sentence from the speech recognition result.
  • the interactive assistant of the headset can have a conversation with the user, and can obtain the conversation sentence of the user from the voice recognition result.
  • Step 403 Invoke the interactive assistant to generate a reply sentence matching the dialogue sentence and play it.
  • the interactive assistant can generate and play matching reply sentences according to the user's dialogue sentences, so as to conduct voice questions and answers with the user.
  • the step of invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence may include: obtaining user orientation information; and invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence.
  • the user orientation information refers to the frontal orientation of the user.
  • the headset may have an orientation sensor.
  • the orientation sensor can detect the user's orientation information in real time.
  • the interactive assistant can obtain the user orientation information detected by the orientation sensor.
  • the step of invoking the interactive assistant to generate and play a reply sentence for voice navigation based on the user orientation information and the dialogue sentence may include: obtaining user geographic location information; and invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information, the dialogue sentence and the user geographic location information.
  • the headset can obtain the current user's geographic location information, for example, the headset detects the current user's geographic location information.
  • the headset can be communicatively connected with the mobile device.
  • the mobile device may have positioning capabilities, for example, by setting a GPS module, or positioning by communicating with a base station, the headset may obtain the current user's geographic location information detected by the mobile device.
  • the step of invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence may include: sending navigation query information to the server;
  • the navigation query information includes the user orientation information and the dialogue sentence;
  • the reply sentence for voice navigation sent by the server is received and played, and the reply sentence for voice navigation is generated by the server by querying according to the user orientation information and the dialogue sentence.
  • the server can generate the reply sentence for voice navigation by querying according to the user orientation information, the dialogue sentence and the user geographic location information, and then send the reply sentence back to the headset.
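  • The navigation query exchange could be sketched as follows; the field names, the bundling format and the mock server reply are assumptions made for illustration, not a format given by the present disclosure.

```python
# Sketch of the navigation query exchange: the headset bundles the user
# orientation, geographic location and dialogue sentence, and the server
# returns a spoken reply. Field names and the mock reply are assumptions.
import json

def build_navigation_query(orientation_deg: float,
                           geo_location: tuple[float, float],
                           sentence: str) -> str:
    return json.dumps({
        "user_orientation_deg": orientation_deg,   # from the orientation sensor
        "user_geo_location": geo_location,         # e.g. from the paired phone's GPS
        "dialogue_sentence": sentence,
    })

def mock_server_reply(query_json: str) -> str:
    query = json.loads(query_json)
    heading = "ahead of you" if query["user_orientation_deg"] < 90 else "behind you"
    return f"The destination is {heading}; walk about 200 meters toward the red tall building."

print(mock_server_reply(build_navigation_query(
    30.0, (39.98, 116.30), "How do I get to Meizhou Dongpo?")))
```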
  • the interactive assistant of the headset can interact with the user for voice navigation.
  • the navigation voice can be played based on the real-time user orientation information, the real-time user geographic location information, and the voice continuously spoken by the user.
  • Interactive assistant: Walk about 200 meters in the direction of the red tall building.
  • Based on the real-time user orientation information, it is determined that the user starts to walk toward the red tall building at this time.
  • Interactive assistant: There is a barber shop under the red tall building. When you see the barber shop, turn right.
  • Based on the real-time user orientation information, it is determined that the user starts to turn right at this time.
  • Based on the real-time user geographic location information and the user orientation information, it is determined that the user walks to the intersection and turns left.
  • The user is then determined to move forward.
  • Interactive assistant: “Meizhou Dongpo” is on your left; navigation is complete, I wish you a pleasant meal.
  • the headset can obtain the voice recognition result of the user's voice from the server; obtain the dialogue sentence from the voice recognition result, and then call the interactive assistant to generate and play the reply sentence matching the dialogue sentence.
  • the embodiment of the present invention enables the earphone to conduct question and answer according to the user's voice without the user operating the earphone by hand, which simplifies the user's operation process.
  • Referring to FIG. 5, there is shown a flowchart of the steps of Embodiment 5 of an interactive method of the present invention.
  • the method is applied to a server, and the server communicates with a headset, and the headset has an interactive assistant.
  • the method includes:
  • Step 501 the server receives the user voice sent by the headset, and recognizes the user voice to obtain a voice recognition result.
  • the server has a voice recognition function, and can perform voice recognition on the user's voice collected by the headset to obtain a voice recognition result.
  • Step 502 Send the voice recognition result to the headset, where the headset is used to invoke the interactive assistant to perform an interactive operation according to the voice recognition result.
  • the headset is equipped with an interactive assistant, which can be a program installed on the headset, and can provide a variety of interactive functions. After the headset receives the voice recognition result sent by the server, the interactive assistant can be called to perform interactive operations according to the voice recognition result, so as to realize various interactive functions of the headset.
  • the interactive assistant can be awakened in a specific way, for example, a specific voice command. Some interactive functions of the interactive assistant can be executed after being awakened, and some interactive functions can be executed without being awakened.
  • the server can send the voice recognition result of the user's voice to the headset, and the interactive assistant of the headset can perform interactive operations according to the voice recognition result of the user's voice.
  • the user does not need to operate the headset by hand to realize the various interactive functions of the headset.
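  • On the server side, the receive-recognize-return loop of Embodiment 5 could be sketched with a plain HTTP handler and a placeholder recognizer; the port, route and response format are assumptions of this sketch, and a real service would call an actual speech recognition engine.

```python
# Server-side sketch of the receive-recognize-return loop, using the Python
# standard-library HTTP server and a placeholder recognizer. The port, route
# and response format are assumptions; a real service would call an ASR engine.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def recognize_speech(audio: bytes) -> str:
    # Placeholder: a real deployment would run speech recognition here.
    return "play a song suitable for running"

class AsrHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        audio = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        body = json.dumps({"text": recognize_speech(audio)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), AsrHandler).serve_forever()
```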
  • the interactive function of the headset may include a song recommendation function.
  • the headset can call the interactive assistant to obtain the user status according to the voice recognition result.
  • the voice recognition result includes information indicating that the user needs to find a suitable song or adjust the sound effect of the song
  • the interactive assistant of the headset may request the server to recommend the song or adjust the sound effect of the song.
  • the server may obtain the user status detected by the headset; find a recommended song matching the user status, and send it to the headset, so that the headset calls the interactive assistant to recommend the recommended song to the user.
  • the server can obtain the user status detected by the headset; determine the sound effect matching the user status; adjust the preset song to the sound effect, and send the preset song with the adjusted sound effect to the headset so that the headset can call The interactive assistant plays the preset songs after adjusting the sound effects.
  • the interactive function of the headset may include an information recording interactive function.
  • the server may recognize and record information from the user's voice according to the voice recognition result, or obtain the recorded information according to the voice recognition result and send it to the headset, so that the headset can play the recorded information.
  • the server can send the recorded information to the headset, or obtain and record the recorded information of the headset.
  • the server may recognize and record the target voice from the user's voice according to the voice recognition result, or obtain the recorded target voice according to the voice recognition result and send it to the headset, so that the headset can play the target voice.
  • the server may recognize and record the memo information from the voice recognition result according to the voice recognition result, or obtain the preset memo information according to the voice recognition result and send it to the headset, so that the headset can play the preset memo information.
  • the server may generate a reminder event for the memo information after recording the memo information, and send the reminder event to the headset, so that when the trigger condition of the preset reminder event is met, the headset can call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event.
  • the server can also perform semantic analysis on the memo information to obtain a semantic analysis result; according to the semantic analysis result, generate label information for the memo information.
  • the server can search for the preset memo information that matches the target tag information and send it to the headset, so that the headset can play the preset memo information that matches the target tag information.
  • the interactive function of the headset may include a question-and-answer interactive function.
  • the server obtains the dialogue sentence from the voice recognition result, generates a reply sentence matching the dialogue sentence and sends it to the headset, so that the headset plays the reply sentence matching the dialogue sentence to realize the question and answer interaction with the user.
  • the question and answer interactive function may include voice navigation
  • the server may obtain the user orientation information detected by the headset, generate a reply sentence for voice navigation according to the user orientation information and the dialogue sentence, and send it to the headset, so that the headset plays the reply sentence for voice navigation.
  • Referring to FIG. 6, there is shown a structural block diagram of Embodiment 1 of an interactive device of the present invention, which is applied to a headset.
  • the headset is in communication with a server.
  • the headset has an interactive assistant.
  • the device may specifically include the following modules:
  • the voice recognition result obtaining module 601 is configured to send a user voice to the server, and obtain a voice recognition result of the user voice from the server;
  • the first interaction module 602 is configured to invoke the interaction assistant to perform an interaction operation according to the voice recognition result.
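  • One possible object layout for the two modules of device Embodiment 1 is sketched below, only to make the module split concrete; the class and method names mirror the text, and the collaborating server client and assistant objects are assumed stubs.

```python
# One possible object layout for the two modules of device Embodiment 1,
# shown only to make the module split concrete; the names mirror the text
# and the collaborating objects are assumed stubs.
class VoiceRecognitionResultObtainingModule:
    """Sends the user voice to the server and returns the recognition result."""
    def __init__(self, server_client):
        self.server_client = server_client

    def obtain(self, user_voice: bytes) -> str:
        return self.server_client.recognize(user_voice)

class FirstInteractionModule:
    """Invokes the interactive assistant to perform an interactive operation."""
    def __init__(self, interactive_assistant):
        self.assistant = interactive_assistant

    def interact(self, recognition_result: str) -> None:
        self.assistant.perform(recognition_result)
```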
  • Referring to FIG. 7, there is shown a structural block diagram of Embodiment 2 of an interactive device of the present invention, which is applied to a headset.
  • the headset is in communication with a server.
  • the headset has an interactive assistant.
  • the device may specifically include the following modules:
  • the voice recognition result obtaining module 701 is configured to send a user voice to the server, and obtain a voice recognition result of the user voice from the server;
  • the first interaction module 702 is configured to call the interaction assistant to perform an interaction operation according to the voice recognition result.
  • the first interaction module 702 may include:
  • the wake-up module 7021 is used to wake up the interactive assistant according to the voice recognition result
  • the user status acquisition module 7022 is used to acquire the user status
  • the song interaction module 7023 is used to call the interactive assistant to recommend songs or play songs according to the user status.
  • the headset has a gravity sensor
  • the user state acquisition module 7022 is configured to acquire sensor data detected by the gravity sensor, and determine the user status according to the sensor data.
  • the song interaction module 7023 is configured to send the user status to the server, and to receive the recommended song sent by the server and call the interactive assistant to recommend it to the user; the recommended song is a song found by the server that matches the user status.
  • the song interaction module 7023 is configured to send the user status to the server, and to receive the sound-effect-adjusted preset song sent by the server and call the interactive assistant to play it; the sound-effect-adjusted preset song is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
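One way the gravity-sensor path of modules 7022 and 7023 might be realized is sketched below; the motion thresholds, state labels, and the request_recommendation call are assumptions for illustration, not taken from the embodiment.

```python
from statistics import pstdev

def determine_user_status(accel_magnitudes: list[float]) -> str:
    """Classify the user's status from the spread of gravity-sensor magnitudes."""
    variation = pstdev(accel_magnitudes)
    if variation < 0.2:
        return "still"      # e.g. sitting or lying down
    if variation < 1.5:
        return "walking"
    return "running"

class FakeServer:
    """Stand-in for the server that matches songs to a user status."""
    def request_recommendation(self, user_status: str) -> str:
        catalogue = {"still": "slow ballad", "walking": "mid-tempo pop", "running": "high-BPM track"}
        return catalogue.get(user_status, "default playlist")

def recommend_song(samples: list[float], server: FakeServer) -> str:
    """Modules 7022/7023 combined: derive the status and ask the server for a song."""
    return server.request_recommendation(determine_user_status(samples))

print(recommend_song([0.1, 0.2, 2.3, 1.9, 0.4], FakeServer()))
```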
  • the first interaction module 702 may include:
  • the first recording interaction module 7024 is configured to call the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the first recording interaction module 7024 is configured to call the interactive assistant to recognize and record memo information from the voice recognition result, or to obtain and play the preset memo information according to the voice recognition result.
  • the first recording interaction module 7024 is used to call the interactive assistant to recognize and record the target voice from the user voice according to the voice recognition result, or to obtain and play the recorded target voice according to the voice recognition result.
  • the interaction device may further include:
  • the first recorded information transmission module 703 is configured to send recorded information to the server, and/or obtain the information recorded by the server and record it.
  • the interaction device may further include:
  • the first reminder event generating module 704 is configured to generate a reminder event for the memo information after the memo information is recorded.
  • the interaction device may further include:
  • the first reminder event obtaining module 705 is configured to obtain a preset reminder event for the memo information from the server.
  • the interaction device may further include:
  • the first reminder event triggering module 706 is configured to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
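The reminder-event modules could, for example, be reduced to the following sketch; the dataclass fields and the time-based trigger condition are assumed for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReminderEvent:
    memo_text: str
    remind_at: datetime   # the assumed trigger condition: a point in time

class FakeAssistant:
    def play(self, text: str) -> None:
        print(f"playing memo: {text}")

def maybe_play_reminder(event: ReminderEvent, assistant: FakeAssistant,
                        now: datetime | None = None) -> bool:
    """If the trigger condition of the reminder event is met, play the memo."""
    now = now or datetime.now()
    if now >= event.remind_at:
        assistant.play(event.memo_text)
        return True
    return False

event = ReminderEvent("buy milk after work", datetime(2021, 6, 5, 18, 0))
maybe_play_reminder(event, FakeAssistant(), now=datetime(2021, 6, 5, 18, 5))
```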
  • the first recording interaction module 7024 is configured to search the preset memo information for information that matches the voice recognition result, and to call the interactive assistant to play the information that matches the voice recognition result.
  • the interaction device may further include:
  • the first semantic analysis module 707 is configured to obtain a semantic analysis result obtained by the server performing semantic analysis on the memo information
  • the first tag generation module 708 is configured to generate tag information for the memo information according to the semantic analysis result.
  • the first recording interaction module 7024 is configured to, when the voice recognition result indicates a need to find memo information with target tag information, call the interactive assistant to find the preset memo information that matches the target tag information and play it.
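The tag-based memo lookup might look roughly like the sketch below; the keyword table used as a stand-in for semantic analysis is purely hypothetical, since the embodiment does not specify the analysis technique.

```python
MEMOS: list[dict] = []

def generate_tags(memo_text: str) -> set[str]:
    """Toy stand-in for semantic analysis: derive tags from keywords in the memo."""
    keyword_tags = {"meeting": "work", "milk": "shopping", "flight": "travel"}
    return {tag for word, tag in keyword_tags.items() if word in memo_text.lower()}

def record_memo(memo_text: str) -> None:
    """Record the memo together with the tag information generated for it."""
    MEMOS.append({"text": memo_text, "tags": generate_tags(memo_text)})

def find_memos_by_tag(target_tag: str) -> list[str]:
    """Return the recorded memos whose tag information matches the target tag."""
    return [memo["text"] for memo in MEMOS if target_tag in memo["tags"]]

record_memo("Team meeting moved to 3 pm")
record_memo("Buy milk after work")
print(find_memos_by_tag("shopping"))   # ['Buy milk after work']
```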
  • the first interaction module 702 may include:
  • the first dialogue sentence obtaining module 7025 is configured to obtain the dialogue sentence from the voice recognition result
  • the first dialogue interaction module 7026 is configured to call the interactive assistant to generate and play a reply sentence matching the dialogue sentence.
  • the first dialogue interaction module 7026 is used to obtain user location information; call the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
  • the headset has an orientation sensor
  • the first dialogue interaction module 7026 is configured to obtain user orientation information detected by the orientation sensor.
  • the first dialogue interaction module 7026 is configured to obtain user geographic location information, and to call the interactive assistant to generate and play a reply sentence for voice navigation based on the user location information, the dialogue sentence, and the user geographic location information.
  • the first dialog interaction module 7026 is configured to send navigation query information to the server, the navigation query information including the user location information and the dialog sentence, and to receive and play the reply sentence for voice navigation sent by the server; the reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
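The navigation query exchange of module 7026 is illustrated by the following sketch; the query dictionary shape and the server API are assumptions rather than the disclosed protocol.

```python
class FakeServer:
    """Stand-in for the server that answers navigation queries."""
    def navigate(self, query: dict) -> str:
        return (f"You are facing {query['user_orientation']:.0f} degrees; "
                "the place you asked about is behind you, about 300 meters away.")

class FakeAssistant:
    def play(self, sentence: str) -> None:
        print(sentence)

def ask_for_navigation(orientation_degrees: float, dialog_sentence: str,
                       server: FakeServer, assistant: FakeAssistant) -> None:
    """Module 7026 sketched: send a navigation query, then play the server's reply."""
    query = {
        "user_orientation": orientation_degrees,   # e.g. from the orientation sensor
        "dialog_sentence": dialog_sentence,
    }
    assistant.play(server.navigate(query))

ask_for_navigation(90.0, "How do I get to the nearest subway station?",
                   FakeServer(), FakeAssistant())
```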
  • Embodiment 3 of an interactive device of the present invention is applied to a server, and the server is in communication connection with a headset.
  • the headset has an interactive assistant.
  • the device may specifically include the following modules:
  • the voice recognition module 801 is configured to receive the user voice sent by the headset, and recognize the user voice to obtain a voice recognition result;
  • the voice recognition result sending module 802 is configured to send the voice recognition result to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
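A minimal server-side counterpart of modules 801 and 802 might look like the sketch below; the FakeAsrEngine and the callback to the headset are stand-ins, since the embodiment does not prescribe a particular speech recognizer or transport.

```python
class FakeAsrEngine:
    """Stand-in for whatever speech recognizer the server actually uses."""
    def transcribe(self, audio: bytes) -> str:
        return "remind me to call mom at eight"

class FakeHeadset:
    def receive_result(self, recognition_result: str) -> None:
        print(f"headset will act on: {recognition_result}")

class VoiceRecognitionModule:
    """Module 801: receive the user voice and produce a voice recognition result."""
    def __init__(self, asr_engine: FakeAsrEngine):
        self.asr_engine = asr_engine
    def recognize(self, user_voice: bytes) -> str:
        return self.asr_engine.transcribe(user_voice)

class ResultSendingModule:
    """Module 802: send the recognition result back to the headset."""
    def send(self, recognition_result: str, headset: FakeHeadset) -> None:
        headset.receive_result(recognition_result)

result = VoiceRecognitionModule(FakeAsrEngine()).recognize(b"\x00\x01")
ResultSendingModule().send(result, FakeHeadset())
```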
  • Embodiment 4 of an interactive device of the present invention is applied to a server, and the server is in communication connection with a headset.
  • the headset has an interactive assistant.
  • the device may specifically include the following modules:
  • the voice recognition module 901 is configured to receive the user voice sent by the headset, and recognize the user voice to obtain a voice recognition result;
  • the voice recognition result sending module 902 is configured to send the voice recognition result to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
  • the interaction device may further include:
  • the first user status acquiring module 903 is configured to acquire the user status detected by the headset;
  • the first song sending module 904 is configured to find a recommended song matching the user's status and send it to the earphone; the earphone is used to call the interactive assistant to recommend the recommended song to the user.
  • the interaction device may further include:
  • the second user status acquiring module 905 is configured to acquire the user status detected by the headset
  • the sound effect determining module 906 is configured to determine a sound effect matching the user state
  • the second song sending module 907 is configured to adjust a preset song to the sound effect and send the sound-effect-adjusted preset song to the earphone, and the earphone is used to call the interactive assistant to play the preset song with the adjusted sound effect.
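Modules 905 to 907 could be approximated as in the sketch below; the state-to-effect table and the way an effect is "applied" to a song are illustrative assumptions.

```python
SOUND_EFFECTS = {
    "running": {"bass_boost_db": 6, "tempo_factor": 1.05},
    "walking": {"bass_boost_db": 3, "tempo_factor": 1.00},
    "still":   {"bass_boost_db": 0, "tempo_factor": 0.95},
}

def determine_sound_effect(user_status: str) -> dict:
    """Module 906: pick the sound effect that matches the user status."""
    return SOUND_EFFECTS.get(user_status, SOUND_EFFECTS["walking"])

def apply_sound_effect(preset_song: dict, effect: dict) -> dict:
    """Return a copy of the song metadata annotated with the chosen effect."""
    adjusted = dict(preset_song)
    adjusted["effect"] = effect
    return adjusted

class FakeHeadset:
    def play(self, song: dict) -> None:
        print(f"playing {song['title']} with effect {song['effect']}")

def send_adjusted_song(user_status: str, preset_song: dict, headset: FakeHeadset) -> None:
    """Module 907: adjust the preset song and hand it to the headset for playback."""
    headset.play(apply_sound_effect(preset_song, determine_sound_effect(user_status)))

send_adjusted_song("running", {"title": "Preset Song 1"}, FakeHeadset())
```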
  • the interaction device may further include:
  • the recorded information processing module 908 is configured to recognize and record information from the user's voice according to the voice recognition result, or obtain recorded information according to the voice recognition result and send it to the earphone.
  • the earphone is used to play the recorded information.
  • the record information processing module 908 may include:
  • the memo information processing module 9081 is configured to recognize and record memo information from the voice recognition result, or obtain preset memo information according to the voice recognition result and send it to the headset.
  • the record information processing module 908 may include:
  • the voice processing module 9082 is configured to recognize and record a target voice from the user voice according to the voice recognition result, or obtain a recorded target voice according to the voice recognition result and send it to the headset.
  • the interaction device may further include:
  • the second recorded information transmission module 909 is configured to send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
  • the interaction device may further include:
  • the second reminder event generating module 910 is configured to generate a reminder event for the memo information after the memo information is recorded;
  • the reminder event sending module 911 is configured to send the reminder event to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  • the interaction device may further include:
  • the second semantic analysis module 912 is configured to perform semantic analysis on the memo information to obtain a semantic analysis result
  • the second label generating module 913 is configured to generate label information for the memo information according to the semantic analysis result.
  • the memo information processing module 9081 is configured to, when the voice recognition result indicates a need to find memo information with target tag information, search for the preset memo information matching the target tag information and send it to the headset.
  • the interaction device may further include:
  • the dialogue sentence obtaining module 914 is configured to obtain the dialogue sentence from the speech recognition result
  • the reply sentence sending module 915 is configured to generate a reply sentence matching the dialogue sentence and send it to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
  • the reply sentence sending module 915 is used to obtain the user location information detected by the headset, and to generate a reply sentence for voice navigation according to the user location information and the dialogue sentence and send it to the headset.
  • the device embodiments described above are merely illustrative.
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments, which those of ordinary skill in the art can understand and implement without creative work.
  • Fig. 10 is a structural block diagram showing a headset 1000 for interaction according to an exemplary embodiment.
  • the headset 1000 may include one or more of the following components: a processing component 1002, a memory 1004, a power component 1006, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014, And communication component 1016.
  • the processing component 1002 generally controls the overall operations of the headset 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing element 1002 may include one or more processors 1020 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 1002 may include one or more modules to facilitate the interaction between the processing component 1002 and other components.
  • the processing component 1002 may include a multimedia module to facilitate the interaction between the multimedia component 1008 and the processing component 1002.
  • the memory 1004 is configured to store various types of data to support operations on the headset 1000. Examples of these data include instructions for any application or method operated on the headset 1000, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 1004 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • the power component 1006 provides power to various components of the earphone 1000.
  • the power component 1006 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the headset 1000.
  • the multimedia component 1008 includes a screen that provides an output interface between the headset 1000 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 1008 includes a front camera and/or a rear camera. When the headset 1000 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1010 is configured to output and/or input audio signals.
  • the audio component 1010 includes a microphone (MIC), and when the headset 1000 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 1004 or transmitted via the communication component 1016.
  • the audio component 1010 further includes a speaker for outputting audio signals.
  • the I/O interface 1012 provides an interface between the processing component 1002 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 1014 includes one or more sensors for providing the earphone 1000 with various aspects of state evaluation.
  • the sensor component 1014 can detect the on/off state of the headset 1000 and the relative positioning of the components.
  • for example, the components are the display and the keypad of the headset 1000.
  • the sensor component 1014 can also detect the position change of the headset 1000 or a component of the headset 1000, the presence or absence of contact between the user and the headset 1000, the orientation or acceleration/deceleration of the headset 1000, and the temperature change of the headset 1000.
  • the sensor assembly 1014 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 1016 is configured to facilitate wired or wireless communication between the headset 1000 and other devices.
  • the headset 1000 can be connected to a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 1016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 1016 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the headset 1000 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above method.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1004 including instructions; the foregoing instructions may be executed by the processor 1020 of the headset 1000 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • a headset includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for performing the following operations:
  • the interactive assistant is invoked to perform interactive operations according to the voice recognition result.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the headset has a gravity sensor
  • the acquiring the user status includes:
  • the invoking the interactive assistant to recommend songs according to the user status includes:
  • the recommended song is a song searched by the server that matches the user's status.
  • the invoking the interactive assistant to play the song according to the user status includes:
  • the preset song after the sound effect adjustment is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes:
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result includes:
  • the interactive assistant is called to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
  • it also includes instructions for performing the following operations: sending the recorded information to the server, and/or obtaining the information recorded by the server and recording it.
  • it further includes an instruction to perform the following operations: after the memo information is recorded, a reminder event for the memo information is generated.
  • it further includes an instruction to perform the following operations: obtaining a preset reminder event for the memo information from the server.
  • it also includes an instruction to perform the following operations: when the trigger condition of the preset reminder event is met, call the interactive assistant to obtain the memo information corresponding to the preset reminder event and play it.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • it also includes instructions for performing the following operations: obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
  • label information is generated for the memo information.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • the interactive assistant is called to find and play the preset memo information that matches the target tag information.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
  • the headset has an orientation sensor, and the acquiring user orientation information includes:
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence and the user's geographic location information.
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
  • the navigation query information includes the user location information and the dialogue sentence;
  • the reply sentence for voice navigation sent by the server is received and played.
  • the reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
  • a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of the headset, the headset can execute an interactive method, the method comprising:
  • the interactive assistant is invoked to perform interactive operations according to the voice recognition result.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the headset has a gravity sensor
  • the acquiring the user status includes:
  • the invoking the interactive assistant to recommend songs according to the user status includes:
  • the recommended song is a song searched by the server that matches the user's status.
  • the invoking the interactive assistant to play the song according to the user status includes:
  • the preset song after the sound effect adjustment is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result includes:
  • the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result includes:
  • the interactive assistant is called to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
  • the method further includes: sending the recorded information to the server, and/or obtaining the information recorded by the server and recording it.
  • the method further includes: after the memo information is recorded, generating a reminder event for the memo information.
  • the method further includes: obtaining a preset reminder event for the memo information from the server.
  • the method further includes: when the trigger condition of the preset reminder event is met, invoking the interactive assistant to obtain and play the memo information corresponding to the preset reminder event.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • the method further includes: obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
  • label information is generated for the memo information.
  • the acquiring and playing the preset memo information according to the voice recognition result includes:
  • the interactive assistant is called to find and play the preset memo information that matches the target tag information.
  • the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
  • the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
  • the headset has an orientation sensor, and the acquiring user orientation information includes:
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
  • the interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence and the user's geographic location information.
  • the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
  • the navigation query information includes the user location information and the dialogue sentence;
  • the reply sentence for voice navigation sent by the server is received and played.
  • the reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
  • Fig. 11 is a schematic structural diagram showing a server 1100 for interaction according to another exemplary embodiment of the present invention.
  • the server may differ considerably depending on configuration or performance, and may include one or more central processing units (CPU) 1122 (for example, one or more processors), memory 1132, and one or more storage media 1130 (for example, one or more mass storage devices) for storing application programs 1142 or data 1144.
  • the memory 1132 and the storage medium 1130 may be short-term storage or persistent storage.
  • the program stored in the storage medium 1130 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server.
  • the central processing unit 1122 may be configured to communicate with the storage medium 1130, and execute a series of instruction operations in the storage medium 1130 on the server.
  • the server may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input and output interfaces 1158, one or more keyboards 1156, and/or one or more operating systems 1141, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
  • a server includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for performing the following operations:
  • the voice recognition result is sent to the earphone, and the earphone is used for invoking an interactive assistant to perform an interactive operation according to the voice recognition result.
  • it also includes instructions for performing the following operations: acquiring the user status detected by the headset;
  • the headset is used to call the interactive assistant to recommend the recommended song to the user.
  • it also includes instructions for performing the following operations: acquiring the user status detected by the headset;
  • the preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
  • it further includes instructions for performing the following operations: recognizing and recording information from the user voice according to the voice recognition result, or obtaining recorded information according to the voice recognition result and sending it to the headset, The earphone is used to play the recorded information.
  • the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
  • it further includes instructions for performing the following operations: sending information recognized from the user's voice to the headset, and/or acquiring and recording the information recorded by the headset.
  • it also includes instructions for performing the following operations: after recording the memo information, generating a reminder event for the memo information;
  • the reminder event is sent to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  • it also contains instructions for performing the following operations: performing semantic analysis on the memo information to obtain a semantic analysis result;
  • label information is generated for the memo information.
  • the acquiring preset memo information according to the voice recognition result and sending it to the headset includes:
  • when the voice recognition result indicates a need to find memo information with target tag information, searching for the preset memo information that matches the target tag information and sending it to the headset.
  • it also includes instructions for performing the following operations: obtaining a dialogue sentence from the voice recognition result;
  • a reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
  • the generating and sending a reply sentence matching the conversation sentence to the headset includes:
  • a reply sentence for voice navigation is generated and sent to the headset.
  • a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of a server, the server can execute an interactive method, the method including:
  • the voice recognition result is sent to the earphone, and the earphone is used for invoking an interactive assistant to perform an interactive operation according to the voice recognition result.
  • it also includes:
  • the headset is used to call the interactive assistant to recommend the recommended song to the user.
  • it also includes:
  • the preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
  • it also includes:
  • the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset includes:
  • a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
  • it also includes:
  • it also includes:
  • the reminder event is sent to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  • it also includes:
  • label information is generated for the memo information.
  • the acquiring preset memo information according to the voice recognition result and sending it to the headset includes:
  • when the voice recognition result indicates a need to find memo information with target tag information, searching for the preset memo information that matches the target tag information and sending it to the headset.
  • it also includes:
  • a reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
  • the generating and sending a reply sentence matching the conversation sentence to the headset includes:
  • a reply sentence for voice navigation is generated and sent to the headset.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
  • the instruction device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing terminal equipment, so that a series of operation steps are executed on the computer or other programmable terminal equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable terminal equipment thereby provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Provided in embodiments of the present invention are an interaction method, an interaction device, an earphone, and a server. The earphone is engaged in a communication connection with a server and has an interaction assistant. The method comprises: the earphone sending voice of a user to the server, and acquiring, from the server, a result of voice recognition performed on the voice of the user; and calling the interaction assistant to perform an interaction operation according to the result of the voice recognition. The method enables multiple interaction functions of the earphone without requiring the user to operate the earphone manually.

Description

Interaction method, device, earphone and server
This application claims priority to the Chinese invention patent application with application number 202010507540.4, filed in China on June 5, 2020 and entitled "Interaction method, device, earphone and server".
Technical field
The present invention relates to the technical field of electronic equipment, and in particular to an interaction method, an interaction device, an earphone and a server.
Background
With the continuous development of science and technology, electronic technology has also developed rapidly; there are more and more types of electronic devices, and people are increasingly accustomed to using a variety of electronic devices in their daily lives.
However, in some scenarios there are still restrictions on operating an electronic device, which is inconvenient for the user. For example, when driving a car, cycling or running, it is inconvenient for the user to operate a handheld electronic device.
Summary of the invention
Embodiments of the present invention provide an interaction method, an interaction device, an earphone and a server.
An embodiment of the present invention discloses an interaction method applied to an earphone, where the earphone is communicatively connected with a server and has an interactive assistant, and the method includes:
the earphone sending a user voice to the server, and obtaining a voice recognition result of the user voice from the server;
invoking the interactive assistant to perform an interactive operation according to the voice recognition result.
Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
waking up the interactive assistant according to the voice recognition result;
obtaining a user status;
invoking the interactive assistant to recommend a song or play a song according to the user status.
Optionally, the earphone has a gravity sensor, and the obtaining a user status includes:
acquiring sensing data detected by the gravity sensor, and determining the user status according to the sensing data.
Optionally, the invoking the interactive assistant to recommend a song according to the user status includes:
sending the user status to the server;
receiving a recommended song sent by the server and invoking the interactive assistant to recommend it to the user, where the recommended song is a song found by the server that matches the user status.
Optionally, the invoking the interactive assistant to play a song according to the user status includes:
sending the user status to the server;
receiving the sound-effect-adjusted preset song sent by the server and invoking the interactive assistant to play it, where the sound-effect-adjusted preset song is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
invoking the interactive assistant to recognize information from the user voice according to the voice recognition result and record it, or to obtain recorded information according to the voice recognition result and play it.
Optionally, the invoking the interactive assistant to recognize information from the user voice according to the voice recognition result and record it, or to obtain recorded information according to the voice recognition result and play it, includes:
invoking the interactive assistant to recognize memo information from the voice recognition result and record it, or to obtain preset memo information according to the voice recognition result and play it.
Optionally, the invoking the interactive assistant to recognize information from the user voice according to the voice recognition result and record it, or to obtain recorded information according to the voice recognition result and play it, includes:
invoking the interactive assistant to recognize a target voice from the user voice according to the voice recognition result and record it, or to obtain a recorded target voice according to the voice recognition result and play it.
Optionally, the method further includes:
sending recorded information to the server, and/or obtaining information recorded by the server and recording it.
Optionally, the method further includes:
after recording memo information, generating a reminder event for the memo information.
Optionally, the method further includes:
obtaining a preset reminder event for memo information from the server.
Optionally, the method further includes:
when a trigger condition of the preset reminder event is met, invoking the interactive assistant to obtain the memo information corresponding to the preset reminder event and play it.
Optionally, the obtaining preset memo information according to the voice recognition result and playing it includes:
searching the preset memo information for information that matches the voice recognition result;
invoking the interactive assistant to play the information that matches the voice recognition result.
Optionally, the method further includes:
obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
generating tag information for the memo information according to the semantic analysis result.
Optionally, the obtaining preset memo information according to the voice recognition result and playing it includes:
when the voice recognition result indicates a need to find memo information with target tag information, invoking the interactive assistant to find the preset memo information that matches the target tag information and play it.
Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
obtaining a dialogue sentence from the voice recognition result;
invoking the interactive assistant to generate a reply sentence matching the dialogue sentence and play it.
Optionally, the invoking the interactive assistant to generate a reply sentence matching the dialogue sentence and play it includes:
obtaining user orientation information;
invoking the interactive assistant to generate a reply sentence for voice navigation according to the user orientation information and the dialogue sentence and play it.
Optionally, the earphone has an orientation sensor, and the obtaining user orientation information includes:
acquiring the user orientation information detected by the orientation sensor.
Optionally, the invoking the interactive assistant to generate a reply sentence for voice navigation according to the user orientation information and the dialogue sentence and play it includes:
obtaining user geographic location information;
invoking the interactive assistant to generate a reply sentence for voice navigation according to the user orientation information, the dialogue sentence and the user geographic location information and play it.
Optionally, the invoking the interactive assistant to generate a reply sentence for voice navigation according to the user orientation information and the dialogue sentence and play it includes:
sending navigation query information to the server, where the navigation query information includes the user orientation information and the dialogue sentence;
receiving the reply sentence for voice navigation sent by the server and playing it, where the reply sentence for voice navigation is generated by the server through a query according to the user orientation information and the dialogue sentence.
An embodiment of the present invention discloses an interaction method applied to a server, where the server is communicatively connected with an earphone and the earphone has an interactive assistant, and the method includes:
the server receiving a user voice sent by the earphone, and recognizing the user voice to obtain a voice recognition result;
sending the voice recognition result to the earphone, where the earphone is used to invoke the interactive assistant to perform an interactive operation according to the voice recognition result.
Optionally, the method further includes:
obtaining the user status detected by the earphone;
finding a recommended song that matches the user status and sending it to the earphone, where the earphone is used to invoke the interactive assistant to recommend the recommended song to the user.
Optionally, the method further includes:
obtaining the user status detected by the earphone;
determining a sound effect that matches the user status;
adjusting a preset song to the sound effect, and sending the sound-effect-adjusted preset song to the earphone, where the earphone is used to invoke the interactive assistant to play the sound-effect-adjusted preset song.
Optionally, the method further includes:
recognizing information from the user voice according to the voice recognition result and recording it, or obtaining recorded information according to the voice recognition result and sending it to the earphone, where the earphone is used to play the recorded information.
Optionally, the recognizing information from the user voice according to the voice recognition result and recording it, or obtaining recorded information according to the voice recognition result and sending it to the earphone, includes:
recognizing memo information from the voice recognition result and recording it, or obtaining preset memo information according to the voice recognition result and sending it to the earphone.
Optionally, the recognizing information from the user voice according to the voice recognition result and recording it, or obtaining recorded information according to the voice recognition result and sending it to the earphone, includes:
recognizing a target voice from the user voice according to the voice recognition result and recording it, or obtaining a recorded target voice according to the voice recognition result and sending it to the earphone.
Optionally, the method further includes:
sending the information recognized from the user voice to the earphone, and/or obtaining the information recorded by the earphone and recording it.
Optionally, the method further includes:
after recording memo information, generating a reminder event for the memo information;
sending the reminder event to the earphone, where the earphone is used to invoke the interactive assistant to obtain the memo information corresponding to the preset reminder event and play it when a trigger condition of the preset reminder event is met.
Optionally, the method further includes:
performing semantic analysis on the memo information to obtain a semantic analysis result;
generating tag information for the memo information according to the semantic analysis result.
Optionally, the obtaining preset memo information according to the voice recognition result and sending it to the earphone includes:
when the voice recognition result indicates a need to find memo information with target tag information, finding the preset memo information that matches the target tag information and sending it to the earphone.
Optionally, the method further includes:
obtaining a dialogue sentence from the voice recognition result;
generating a reply sentence matching the dialogue sentence and sending it to the earphone, where the earphone is used to play the reply sentence matching the dialogue sentence.
Optionally, the generating a reply sentence matching the dialogue sentence and sending it to the earphone includes:
obtaining the user orientation information detected by the earphone;
generating a reply sentence for voice navigation according to the user orientation information and the dialogue sentence and sending it to the earphone.
An embodiment of the present invention discloses an interaction device applied to an earphone, where the earphone is communicatively connected with a server and has an interactive assistant, and the device includes:
a voice recognition result obtaining module, configured to send a user voice to the server and obtain a voice recognition result of the user voice from the server;
a first interaction module, configured to invoke the interactive assistant to perform an interactive operation according to the voice recognition result.
Optionally, the first interaction module includes:
a wake-up module, configured to wake up the interactive assistant according to the voice recognition result;
a user status acquisition module, configured to acquire the user status;
a song interaction module, configured to invoke the interactive assistant to recommend a song or play a song according to the user status.
Optionally, the earphone has a gravity sensor, and the user status acquisition module is configured to acquire sensing data detected by the gravity sensor and determine the user status according to the sensing data.
Optionally, the song interaction module is configured to send the user status to the server, and to receive a recommended song sent by the server and invoke the interactive assistant to recommend it to the user, where the recommended song is a song found by the server that matches the user status.
Optionally, the song interaction module is configured to send the user status to the server, and to receive the sound-effect-adjusted preset song sent by the server and invoke the interactive assistant to play it, where the sound-effect-adjusted preset song is obtained by the server determining a sound effect that matches the user status and adjusting the preset song to that sound effect.
Optionally, the first interaction module includes:
a first recording interaction module, configured to invoke the interactive assistant to recognize information from the user voice according to the voice recognition result and record it, or to obtain recorded information according to the voice recognition result and play it.
Optionally, the first recording interaction module is configured to invoke the interactive assistant to recognize memo information from the voice recognition result and record it, or to obtain preset memo information according to the voice recognition result and play it.
Optionally, the first recording interaction module is configured to invoke the interactive assistant to recognize a target voice from the user voice according to the voice recognition result and record it, or to obtain a recorded target voice according to the voice recognition result and play it.
可选地,还包括:Optionally, it also includes:
第一记录信息传输模块,用于向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。The first recorded information transmission module is configured to send recorded information to the server, and/or obtain and record the recorded information of the server.
可选地,还包括:Optionally, it also includes:
第一提醒事件生成模块,用于在记录备忘信息之后,生成针对所述备忘信息的提醒事件。The first reminder event generating module is configured to generate a reminder event for the memo information after the memo information is recorded.
可选地,还包括:Optionally, it also includes:
第一提醒事件获取模块,用于从所述服务器获取针对备忘信息的预设提醒事件。The first reminder event obtaining module is configured to obtain preset reminder events for the memo information from the server.
可选地,还包括:Optionally, it also includes:
第一提醒事件触发模块,用于当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The first reminder event triggering module is configured to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
可选地,所述第一记录交互模块,用于从预设备忘信息中查找与所述语音识别结果匹配的信息;调用所述交互助手播放所述与所述语音识别结果匹配的信息。Optionally, the first recording interaction module is configured to search the preset memo information for information that matches the voice recognition result, and call the interactive assistant to play the information matching the voice recognition result.
可选地,还包括:Optionally, it also includes:
第一语义分析模块,用于获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;The first semantic analysis module is configured to obtain a semantic analysis result obtained by the server performing semantic analysis on the memo information;
第一标签生成模块,用于根据语义分析结果,对所述备忘信息生成标签信息。The first label generating module is configured to generate label information for the memo information according to the semantic analysis result.
可选地,所述第一记录交互模块,用于当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时,调用所述交互助手查找与目标标签信息匹配的预设备忘信息并播放。Optionally, the first recording interaction module is configured to, when the voice recognition result indicates a need to find memo information with target tag information, call the interactive assistant to find preset memo information matching the target tag information and play it.
可选地,所述第一交互模块包括:Optionally, the first interaction module includes:
第一对话语句获取模块,用于从所述语音识别结果中获取对话语句;The first dialogue sentence obtaining module is used to obtain the dialogue sentence from the speech recognition result;
第一对话交互模块,用于调用所述交互助手生成与所述对话语句匹配的答复语句并播放。The first dialogue interaction module is used to call the interactive assistant to generate and play a reply sentence matching the dialogue sentence.
可选地,所述第一对话交互模块,用于获取用户方位信息;调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放。Optionally, the first dialogue interaction module is configured to obtain user location information; call the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
可选地,所述耳机具有方位传感器,所述第一对话交互模块,用于获取所述方位传感器检测的用户方位信息。Optionally, the headset has an orientation sensor, and the first dialogue interaction module is configured to acquire user orientation information detected by the orientation sensor.
可选地,所述第一对话交互模块,用于获取用户地理位置信息;调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息,生成用于语音导航的答复语句并播放。Optionally, the first dialogue interaction module is configured to obtain user geographic location information, and call the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence, and the user geographic location information.
可选地,所述第一对话交互模块,用于向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。Optionally, the first dialogue interaction module is configured to send navigation query information to the server, the navigation query information including the user location information and the dialogue sentence; and receive and play the reply sentence for voice navigation sent by the server, the reply sentence for voice navigation being generated by the server through a query according to the user location information and the dialogue sentence.
本发明实施例公开了一种交互装置,应用于服务器,所述服务器与耳机通信连接,所述耳机具有交互助手,所述装置包括:The embodiment of the present invention discloses an interactive device, which is applied to a server, the server is in communication connection with a headset, the headset has an interactive assistant, and the device includes:
语音识别模块,用于接收所述耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;The voice recognition module is used to receive the user voice sent by the headset and recognize the user voice to obtain a voice recognition result;
语音识别结果发送模块,用于向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。The voice recognition result sending module is configured to send the voice recognition result to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
可选地,还包括:Optionally, it also includes:
第一用户状态获取模块,用于获取所述耳机检测的用户状态;The first user status acquiring module is configured to acquire the user status detected by the headset;
第一歌曲发送模块,用于查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。The first song sending module is used to find a recommended song matching the user's status and send it to the earphone; the earphone is used to call the interactive assistant to recommend the recommended song to the user.
可选地,还包括:Optionally, it also includes:
第二用户状态获取模块,用于获取所述耳机检测的用户状态;The second user status acquiring module is configured to acquire the user status detected by the headset;
音效确定模块,用于确定与所述用户状态匹配的音效;A sound effect determining module, configured to determine a sound effect matching the user status;
第二歌曲发送模块,用于将预设歌曲调整为所述音效,并向所述耳机发送调整音效后的所述预设歌曲,所述耳机用于调用所述交互助手播放调整音效后的所述预设歌曲。The second song sending module is configured to adjust a preset song to the sound effect and send the preset song with the adjusted sound effect to the earphone, and the earphone is used to call the interactive assistant to play the preset song with the adjusted sound effect.
可选地,还包括:Optionally, it also includes:
记录信息处理模块,用于根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,所述耳机用于播放所述已记录的信息。The recorded information processing module is configured to recognize and record information from the user's voice according to the voice recognition result, or obtain the recorded information according to the voice recognition result and send it to the earphone, and the earphone is used to play the recorded information.
可选地,所述记录信息处理模块包括:Optionally, the record information processing module includes:
备忘信息处理模块,用于根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录,或根据所述语音识别结果获取预设备忘信息并向所述耳机发送。The memo information processing module is configured to recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain preset memo information according to the voice recognition result and send it to the headset.
可选地,所述记录信息处理模块包括:Optionally, the record information processing module includes:
语音处理模块,用于根据所述语音识别结果从所述用户语音中识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。The voice processing module is configured to recognize and record a target voice from the user voice according to the voice recognition result, or obtain the recorded target voice according to the voice recognition result and send it to the headset.
可选地,还包括:Optionally, it also includes:
第二记录信息传输模块,用于向所述耳机发送从所述用户语音识别的信息,和/或,获取所述耳机已记录的信息并记录。The second record information transmission module is configured to send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
可选地,还包括:Optionally, it also includes:
第二提醒事件生成模块,用于在记录备忘信息之后,生成针对所述备忘信息的提醒事件;The second reminder event generating module is used to generate a reminder event for the memo information after the memo information is recorded;
提醒事件发送模块,用于向所述耳机发送所述提醒事件,所述耳机用于当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The reminder event sending module is configured to send the reminder event to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
可选地,还包括:Optionally, it also includes:
第二语义分析模块,用于对备忘信息进行语义分析,得到语义分析结果;The second semantic analysis module is used to perform semantic analysis on the memo information to obtain the semantic analysis result;
第二标签生成模块,用于根据语义分析结果,对所述备忘信息生成标签信息。The second label generating module is used to generate label information for the memo information according to the semantic analysis result.
可选地,所述备忘信息处理模块,用于当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时,查找与目标标签信息匹配的预设备忘信息并向所述耳机发送。Optionally, the memo information processing module is configured to, when the voice recognition result indicates a need to find memo information with target tag information, search for preset memo information matching the target tag information and send it to the headset.
可选地,还包括:Optionally, it also includes:
对话语句获取模块,用于从所述语音识别结果中获取对话语句;A dialogue sentence obtaining module, which is used to obtain a dialogue sentence from the speech recognition result;
答复语句发送模块,用于生成与所述对话语句匹配的答复语句并向所述耳机发送,所述耳机用于播放所述与所述对话语句匹配的答复语句。The reply sentence sending module is used for generating a reply sentence matching the dialogue sentence and sending it to the earphone, and the earphone is used for playing the reply sentence matching the dialogue sentence.
可选地,所述答复语句发送模块,用于获取所述耳机检测的用户方位信息;根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并向所述耳机发送。Optionally, the reply sentence sending module is configured to obtain the user location information detected by the headset; according to the user location information and the dialogue sentence, generate a reply sentence for voice navigation and send it to the headset.
本发明实施例公开了一种耳机,包括有存储器,以及一个或者一个以上的程序,其中一个或者一个以上程序存储于存储器中,且经配置以由一个或者一个以上处理器执行,所述一个或者一个以上程序包含用于进行以下操作的指令:The embodiment of the present invention discloses a headset, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs contain instructions for performing the following operations:
向服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;Sending a user voice to a server, and obtaining a voice recognition result of the user voice from the server;
调用交互助手根据所述语音识别结果执行交互操作。The interactive assistant is invoked to perform interactive operations according to the voice recognition result.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
根据所述语音识别结果唤醒所述交互助手;Wake up the interactive assistant according to the voice recognition result;
获取用户状态;Get user status;
调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。Invoke the interactive assistant to recommend songs or play songs according to the user status.
可选地,所述耳机具有重力传感器,所述获取用户状态,包括:Optionally, the headset has a gravity sensor, and the acquiring the user status includes:
获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。Acquire the sensing data detected by the gravity sensor, and determine the user state according to the sensing data.
可选地,所述调用所述交互助手根据所述用户状态推荐歌曲,包括:Optionally, the invoking the interactive assistant to recommend songs according to the user status includes:
向所述服务器发送所述用户状态;Sending the user status to the server;
接收所述服务器发送的推荐歌曲并调用所述交互助手向用户推荐,所述推荐歌曲为由所述服务器查找的与所述用户状态匹配的歌曲。Receive the recommended song sent by the server and call the interactive assistant to recommend to the user, the recommended song is a song searched by the server that matches the user's status.
可选地,所述调用所述交互助手根据所述用户状态播放歌曲,包括:Optionally, the invoking the interactive assistant to play the song according to the user status includes:
向所述服务器发送所述用户状态;Sending the user status to the server;
接收所述服务器发送的音效调整后的预设歌曲并调用所述交互助手播放;音效调整后的所述预设歌曲为由所述服务器确定与所述用户状态匹配的音效,并将预设歌曲调整为所述音效得到。Receive the preset song with the adjusted sound effect sent by the server and call the interactive assistant to play it; the preset song with the adjusted sound effect is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
可选地,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:Optionally, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result, includes:
调用所述交互助手根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录,或根据所述语音识别结果获取预设备忘信息并播放。Invoke the interactive assistant to recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain preset memo information according to the voice recognition result and play it.
可选地,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:Optionally, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result, includes:
调用所述交互助手根据所述语音识别结果从所述用户语音中识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并播放。Invoke the interactive assistant to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain the recorded target voice according to the voice recognition result and play it.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。Send the recorded information to the server, and/or obtain and record the recorded information of the server.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
在记录备忘信息之后,生成针对所述备忘信息的提醒事件。After the memo information is recorded, a reminder event for the memo information is generated.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
从所述服务器获取针对备忘信息的预设提醒事件。Obtain a preset reminder event for the memo information from the server.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。When the trigger condition of the preset reminder event is met, the interactive assistant is called to obtain and play the memo information corresponding to the preset reminder event.
可选地,所述根据所述语音识别结果获取预设备忘信息并播放,包括:Optionally, the acquiring and playing the preset memo information according to the voice recognition result includes:
从预设备忘信息中查找与所述语音识别结果匹配的信息;Searching the preset memo information for information that matches the voice recognition result;
调用所述交互助手播放所述与所述语音识别结果匹配的信息。Invoke the interactive assistant to play the information matching the voice recognition result.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;Obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
可选地,所述根据所述语音识别结果获取预设备忘信息并播放,包括:Optionally, the acquiring and playing the preset memo information according to the voice recognition result includes:
当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时,调用所述交互助手查找与目标标签信息匹配的预设备忘信息并播放。When the voice recognition result indicates a need to find memo information with target tag information, the interactive assistant is called to find preset memo information matching the target tag information and play it.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
调用所述交互助手生成与所述对话语句匹配的答复语句并播放。Invoke the interactive assistant to generate a reply sentence matching the dialogue sentence and play it.
可选地,所述调用所述交互助手,生成与所述对话语句匹配的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence includes:
获取用户方位信息;Obtain user location information;
调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
可选地,所述耳机具有方位传感器,所述获取用户方位信息,包括:Optionally, the headset has an orientation sensor, and the acquiring user orientation information includes:
获取所述方位传感器检测的用户方位信息。Obtain user position information detected by the position sensor.
可选地,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
获取用户地理位置信息;Obtain user geographic location information;
调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence and the user's geographic location information.
可选地,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;Sending navigation query information to the server; the navigation query information includes the user location information and the dialogue sentence;
接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。The reply sentence for voice navigation sent by the server is received and played. The reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
本发明实施例公开了一种服务器,包括有存储器,以及一个或者一个以上的程序,其中一个或者一个以上程序存储于存储器中,且经配置以由一个或者一个以上处理器执行,所述一个或者一个以上程序包含用于进行以下操作的指令:The embodiment of the present invention discloses a server, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs contain instructions for performing the following operations:
接收耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;Receive the user's voice sent by the headset, and recognize the user's voice to obtain the voice recognition result;
向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。The voice recognition result is sent to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。Find a recommended song matching the user's state and send it to the headset; the headset is used to call the interactive assistant to recommend the recommended song to the user.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
确定与所述用户状态匹配的音效;Determining a sound effect matching the user status;
将预设歌曲调整为所述音效,并向所述耳机发送调整音效后的所述预设歌曲,所述耳机用于调用所述交互助手播放调整音效后的所述预设歌曲。The preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,所述耳机用于播放所述已记录的信息。Recognize and record information from the user's voice according to the voice recognition result, or obtain and send the recorded information to the earphone according to the voice recognition result, and the earphone is used to play the recorded information.
可选地,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:Optionally, the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset, includes:
根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录,或根据所述语音识别结果获取预设备忘信息并向所述耳机发送。Recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain preset memo information according to the voice recognition result and send it to the headset.
可选地,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:Optionally, the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset, includes:
根据所述语音识别结果从所述用户语音中识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。Recognize and record a target voice from the user voice according to the voice recognition result, or obtain the recorded target voice according to the voice recognition result and send it to the headset.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
向所述耳机发送从所述用户语音识别的信息,和/或,获取所述耳机已记录的信息并记录。Send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
在记录备忘信息之后,生成针对所述备忘信息的提醒事件;After the memo information is recorded, a reminder event for the memo information is generated;
向所述耳机发送所述提醒事件,所述耳机用于当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。Send the reminder event to the earphone, where the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
对备忘信息进行语义分析,得到语义分析结果;Perform semantic analysis on the memo information to obtain the semantic analysis result;
根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
可选地,所述根据所述语音识别结果获取预设备忘信息并向所述耳机发送,包括:Optionally, the acquiring preset memo information according to the voice recognition result and sending it to the headset includes:
当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时,查找与目标标签信息匹配的预设备忘信息并向所述耳机发送。When the voice recognition result indicates a need to find memo information with target tag information, search for preset memo information matching the target tag information and send it to the headset.
可选地,还包含用于进行以下操作的指令:Optionally, it also contains instructions for performing the following operations:
从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
生成与所述对话语句匹配的答复语句并向所述耳机发送,所述耳机用于播放所述与所述对话语句匹配的答复语句。A reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
可选地,所述生成与所述对话语句匹配的答复语句并向所述耳机发送,包括:Optionally, the generating and sending a reply sentence matching the conversation sentence to the headset includes:
获取所述耳机检测的用户方位信息;Acquiring user location information detected by the headset;
根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并向所述耳机发送。According to the user location information and the dialogue sentence, a reply sentence for voice navigation is generated and sent to the headset.
本发明实施例公开了一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如上任一项所述的交互方法的步骤。The embodiment of the present invention discloses a computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the interaction method described in any one of the above are implemented.
本发明实施例还公开了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在计算处理设备上运行时,导致所述计算处理设备执行根据上述任一所述的交互方法。The embodiment of the present invention also discloses a computer program, including computer readable code, which when the computer readable code runs on a computing processing device, causes the computing processing device to execute the interaction method according to any one of the foregoing.
本发明实施例包括以下优点:The embodiments of the present invention include the following advantages:
在本发明实施例中,耳机可以从服务器获取用户语音的语音识别结果,耳机的交互助手可以根据用户语音的语音识别结果进行交互操作,不需要用户使用手操作耳机,实现耳机的多种交互功能。In the embodiment of the present invention, the headset can obtain the voice recognition result of the user's voice from the server, and the interactive assistant of the headset can perform interactive operations according to the voice recognition result of the user's voice, so that the user does not need to operate the headset by hand and multiple interactive functions of the headset are realized.
上述说明仅是本发明技术方案的概述,为了能够更清楚了解本发明的技术手段,而可依照说明书的内容予以实施,并且为了让本发明的上述和其它目的、特征和优点能够更明显易懂,以下特举本发明的具体实施方式。The above description is only an overview of the technical solution of the present invention. In order to understand the technical means of the present invention more clearly, it can be implemented in accordance with the content of the specification, and in order to make the above and other objectives, features and advantages of the present invention more obvious and easy to understand. In the following, specific embodiments of the present invention will be cited.
附图说明Description of the drawings
图1是本发明的一种交互方法实施例一的步骤流程图;FIG. 1 is a flowchart of the steps of Embodiment 1 of an interactive method of the present invention;
图2是本发明的一种交互方法实施例二的步骤流程图;2 is a flowchart of the steps of Embodiment 2 of an interactive method of the present invention;
图3是本发明的一种交互方法实施例三的步骤流程图;FIG. 3 is a flowchart of the steps of Embodiment 3 of an interactive method of the present invention;
图4是本发明的一种交互方法实施例四的步骤流程图;FIG. 4 is a flowchart of the steps of Embodiment 4 of an interactive method of the present invention;
图5是本发明的一种交互方法实施例五的步骤流程图;FIG. 5 is a flowchart of steps in Embodiment 5 of an interactive method of the present invention;
图6是本发明的一种交互装置实施例一的结构框图;6 is a structural block diagram of Embodiment 1 of an interactive device of the present invention;
图7是本发明的一种交互装置实施例二的结构框图;FIG. 7 is a structural block diagram of Embodiment 2 of an interactive device of the present invention;
图8是本发明的一种交互装置实施例三的结构框图;8 is a structural block diagram of Embodiment 3 of an interactive device of the present invention;
图9是本发明的一种交互装置实施例四的结构框图;9 is a structural block diagram of Embodiment 4 of an interactive device of the present invention;
图10是一示例性实施例示出的一种用于交互的耳机的结构框图;Fig. 10 is a structural block diagram of a headset for interaction according to an exemplary embodiment;
图11是另一示例性实施例示出的一种用于交互的服务器的结构示意图。Fig. 11 is a schematic structural diagram of a server for interaction shown in another exemplary embodiment.
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present invention, and for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative work.
具体实施方式detailed description
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings of the embodiments. Obviously, the described embodiments are a part of the embodiments of the present invention, rather than all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
参照图1,示出了本发明的一种交互方法实施例一的步骤流程图,该方法应用于耳机,所述耳机与服务器通信连接,所述耳机具有交互助手,所述方法具体可以包括如下步骤:Referring to Fig. 1, there is shown a flowchart of the steps of Embodiment 1 of an interaction method of the present invention. The method is applied to a headset, the headset is in communication connection with a server, and the headset has an interactive assistant. The method may specifically include the following steps:
步骤101,所述耳机向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果。Step 101: The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
耳机是人们日常生活中经常使用的可携带电子设备,耳机可以具有播放功能,拾音功能和通信功能。用户可以使用耳机听歌或进行电话沟通。Earphones are portable electronic devices that people often use in daily life. The earphones can have playback functions, sound pickup functions, and communication functions. Users can use headphones to listen to songs or communicate on the phone.
服务器具有语音识别功能,可以对耳机采集的用户语音进行语音识别。The server has a voice recognition function, which can perform voice recognition on the user's voice collected by the headset.
步骤102,调用所述交互助手根据所述语音识别结果执行交互操作。Step 102: Invoke the interactive assistant to perform an interactive operation according to the voice recognition result.
耳机安装交互助手,交互助手可以是安装在耳机中独立运行的程序,可以提供多种多样的交互功能。交互助手可以根据语音识别结果执行交互操作,以实现耳机的多种交互功能。The headset is installed with an interactive assistant. The interactive assistant can be a program installed in the headset to run independently, and can provide a variety of interactive functions. The interactive assistant can perform interactive operations based on the voice recognition results to implement various interactive functions of the headset.
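As a hedged illustration of the flow in steps 101 and 102, the sketch below shows a headset client sending recorded audio to a recognition endpoint and dispatching the returned text to an assistant routine. The endpoint URL, the response format, and the keyword-based dispatch are assumptions introduced for illustration only; they are not specified in this disclosure.

```python
# Illustrative sketch only; the real transport protocol and assistant APIs are
# not described in this document.
import requests

ASR_URL = "https://example-server/asr"  # hypothetical recognition endpoint


def recognize_on_server(audio_bytes: bytes) -> str:
    """Step 101: send the user's voice to the server and fetch the recognition result."""
    resp = requests.post(ASR_URL, data=audio_bytes,
                         headers={"Content-Type": "application/octet-stream"})
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field


def dispatch_to_assistant(text: str) -> None:
    """Step 102: the interactive assistant performs an operation based on the result."""
    if "歌" in text:
        print("assistant: recommend or play a song")
    elif "记下" in text:
        print("assistant: record memo information")
    else:
        print("assistant: answer the dialogue:", text)
```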
在本发明实施例中,耳机可以与移动终端通信连接,移动终端可以安装有与耳机的交互助手配套的应用程序APP,用户可以在APP的界面控制交互助手。In the embodiment of the present invention, the headset can be communicatively connected with a mobile terminal, the mobile terminal may be installed with an APP associated with the interactive assistant of the headset, and the user can control the interactive assistant on the interface of the APP.
交互助手可以通过特定的方式被唤醒,例如,特定的语音指令。交互助手的一些交互功能可以在被唤醒后才执行,一些交互功能可以在不被唤醒的情况下也能执行。The interactive assistant can be awakened in a specific way, for example, a specific voice command. Some interactive functions of the interactive assistant can be executed after being awakened, and some interactive functions can be executed without being awakened.
在本发明实施例中,耳机可以从服务器获取用户语音的语音识别结果,耳机的交互助手可以根据用户语音的语音识别结果进行交互操作,不需要用户使用手操作耳机,实现耳机的多种交互功能。In the embodiment of the present invention, the headset can obtain the voice recognition result of the user's voice from the server, and the interactive assistant of the headset can perform interactive operations according to the voice recognition result of the user's voice, so that the user does not need to operate the headset by hand and multiple interactive functions of the headset are realized.
在跑步、开车等场景下,用户不方便拿出手机查找歌曲,为了便于用户查找歌曲,在一实施例中,耳机的交互功能可以包括歌曲推荐功能。In scenarios such as running and driving, it is inconvenient for the user to take out the mobile phone to search for songs. In order to facilitate the user to search for songs, in one embodiment, the interactive function of the earphone may include a song recommendation function.
参照图2,示出了本发明的一种交互方法实施例二的步骤流程图,该方法应用于耳机,所述耳机与服务器通信连接,所述耳机具有交互助手,所述方法具体可以包括如下步骤:Referring to Fig. 2, there is shown a flowchart of the steps of Embodiment 2 of an interaction method of the present invention. The method is applied to a headset, the headset is in communication connection with a server, and the headset has an interactive assistant. The method may specifically include the following steps:
步骤201,所述耳机向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果。Step 201: The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
步骤202,根据所述语音识别结果唤醒所述交互助手。Step 202: Wake up the interactive assistant according to the voice recognition result.
当语音识别结果包括表征用户需要查找合适的歌曲,或调整歌曲的音效的信息时,唤醒交互助手向用户推荐歌曲的功能。例如,语音识别结果包括:“播放一个适合跑步的歌曲吧”,或“切换到适合跑步的音效”。When the voice recognition result includes information indicating that the user needs to find a suitable song or adjust the sound effect of the song, the interactive assistant is awakened to recommend the song to the user. For example, speech recognition results include: "play a song suitable for running" or "switch to sound effects suitable for running".
步骤203,获取用户状态。Step 203: Obtain the user status.
用户状态,即用户所处的状态,可以包括静坐状态、步行状态、跑步状态、驾驶状态、骑行状态等。The user status, that is, the status of the user, can include sitting status, walking status, running status, driving status, riding status, and so on.
在一种示例中,耳机可以具有可以检测用户状态的重力传感器。所述获取用户状态的步骤可以包括:获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。具体的,可以根据重力传感器的传感数据,采用检测用户状态的算法,确定用户状态。In an example, the headset may have a gravity sensor that can detect the state of the user. The step of acquiring the user status may include: acquiring sensor data detected by the gravity sensor, and determining the user status according to the sensor data. Specifically, an algorithm for detecting the user's state can be used to determine the user's state based on the sensing data of the gravity sensor.
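A minimal sketch of one way such a user-state detection algorithm could work, assuming the gravity sensor yields (x, y, z) acceleration samples; the variance thresholds below are invented for illustration and are not part of this disclosure.

```python
import math


def classify_user_state(samples):
    """Illustrative heuristic: separate sitting / walking / running by the variance
    of the acceleration magnitude reported by the gravity sensor."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < 0.5:
        return "sitting"
    elif var < 4.0:
        return "walking"
    return "running"
```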
步骤204,调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。Step 204: Invoke the interactive assistant to recommend songs or play songs according to the user status.
在本发明实施例中,交互助手可以根据用户状态向用户推荐歌曲,由用户确定是否播放歌曲;也可以直接播放适配用户状态的歌曲。In the embodiment of the present invention, the interactive assistant can recommend songs to the user according to the user's status, and the user can determine whether to play the song; it can also directly play the song that adapts to the user's status.
具体的,可以预先对预设歌曲列表中的歌曲配置多种标签或分类,例如,“摇滚”、“流行”、“爵士”、“民谣”、“纯音乐”、“节奏感强”、“激情”、“抒情”、“宁静”、“动感”等标签。交互助手可以查找与用户状态匹配的标签的歌曲。例如,当用户状态为“跑步状态”,则可以推荐标签为“节奏感强”的歌曲。Specifically, the songs in a preset song list can be configured in advance with multiple tags or categories, for example, tags such as "rock", "pop", "jazz", "folk", "pure music", "strong rhythm", "passion", "lyric", "serene", and "dynamic". The interactive assistant can search for songs whose tags match the user status. For example, when the user status is "running", songs tagged "strong rhythm" can be recommended.
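The tag-matching recommendation described above could be sketched as follows; the state-to-tag mapping and the song library are hypothetical examples, not data from this disclosure.

```python
# Hypothetical mapping and song list; the document only says songs carry tags
# such as "strong rhythm" and that tags are matched against the user state.
STATE_TO_TAGS = {
    "running": {"strong rhythm", "dynamic"},
    "sitting": {"serene", "lyric"},
    "driving": {"pop", "passion"},
}

SONG_LIBRARY = [
    {"title": "Song A", "tags": {"rock", "strong rhythm"}},
    {"title": "Song B", "tags": {"pure music", "serene"}},
]


def recommend_songs(user_state: str):
    """Return titles whose tags overlap the tags associated with the user state."""
    wanted = STATE_TO_TAGS.get(user_state, set())
    return [s["title"] for s in SONG_LIBRARY if s["tags"] & wanted]

# e.g. recommend_songs("running") -> ["Song A"]
```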
在一种示例中,可以调用所述交互助手查找与用户状态匹配的推荐歌曲并向用户推荐。In an example, the interactive assistant may be called to find recommended songs that match the user's status and recommend them to the user.
在另一种示例中,可以由服务器查找推荐歌曲。所述调用所述交互助手根据所述用户状态推荐歌曲的步骤可以包括:向所述服务器发送所述用户状态;接收所述服务器发送的推荐歌曲并调用所述交互助手向用户推荐,所述推荐歌曲为由所述服务器查找的与所述用户状态匹配的歌曲。In another example, the server can look up recommended songs. The step of invoking the interactive assistant to recommend songs based on the user status may include: sending the user status to the server; receiving recommended songs sent by the server and invoking the interactive assistant to recommend to the user, the recommendation The song is a song found by the server that matches the state of the user.
在本发明实施例中,交互助手可以根据用户状态,按照匹配的音效播放歌曲。具体的,交互助手可以通过音效算法调整歌曲的音效。音效算法可以将歌曲调整为多种类型的音效,例如,“宁静”、“悠远”、“摇滚”等等。当用户状态为静坐,可以将预设歌曲调整为“宁静”的音效。In the embodiment of the present invention, the interactive assistant can play the song according to the matched sound effect according to the user's status. Specifically, the interactive assistant can adjust the sound effect of the song through the sound effect algorithm. The sound effect algorithm can adjust the song into multiple types of sound effects, such as "quiet", "far away", "rock" and so on. When the user's state is sitting still, the preset song can be adjusted to a "quiet" sound effect.
在一种示例中,可以由交互助手调整歌曲的音效,所述调用所述交互助手根据所述用户状态推荐歌曲的步骤可以包括:调用所述交互助手确定与所述用户状态匹配的音效,并将预设歌曲调整为所述音效;播放调整音效后的所述预设歌曲。In an example, the interactive assistant may adjust the sound effect of the song, and the step of invoking the interactive assistant to recommend a song according to the user state may include: invoking the interactive assistant to determine the sound effect matching the user state, and Adjusting a preset song to the sound effect; playing the preset song after adjusting the sound effect.
在另一种示例中,可以由服务器调整歌曲的音效,所述调用所述交互助手根据所述用户状态推荐歌曲的步骤可以包括:向所述服务器发送所述用户状态;接收所述服务器发送的音效调整后的预设歌曲并调用所述交互助手播放;音效调整后的所述预设歌曲为由所述服务器确定与所述用户状态匹配的音效,并将预设歌曲调整为所述音效得到。预设歌曲可以是耳机的播放列表中的正在播放歌曲。In another example, the sound effect of the song may be adjusted by the server, and the step of invoking the interactive assistant to recommend a song according to the user status may include: sending the user status to the server; and receiving the preset song with the adjusted sound effect sent by the server and calling the interactive assistant to play it, where the preset song with the adjusted sound effect is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect. The preset song may be the currently playing song in the playlist of the headset.
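A minimal sketch of the server-side sound-effect selection, assuming each user state maps to a named effect preset; the actual audio-processing algorithm is not described in this document, so the sketch only attaches the chosen preset to the song as metadata.

```python
# Illustrative state-to-effect mapping; a real system would re-render the audio
# with the chosen effect instead of just tagging the song.
STATE_TO_EFFECT = {"sitting": "quiet", "running": "rock", "walking": "far away"}


def adjust_preset_song(song: dict, user_state: str) -> dict:
    """Pick a sound effect matching the user state and attach it to the song."""
    effect = STATE_TO_EFFECT.get(user_state, "default")
    adjusted = dict(song)
    adjusted["effect"] = effect
    return adjusted
```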
在本发明实施例中,耳机可以从服务器获取用户语音的语音识别结果,根据语音识别结果唤醒交互助手并获取用户状态,调用交互助手根据用户状态推荐歌曲或播放歌曲。本发明实施例实现了不需要用户使用手操作耳机,由耳机推荐歌曲或播放歌曲,简化用户的操作过程。In the embodiment of the present invention, the earphone can obtain the voice recognition result of the user's voice from the server, wake up the interactive assistant according to the voice recognition result and obtain the user status, and call the interactive assistant to recommend or play songs according to the user status. The embodiment of the present invention realizes that the user does not need to use the hand to operate the earphone, and the earphone recommends songs or plays the song, which simplifies the user's operation process.
在跑步、开车等场景下,用户不方便拿出手机记录备忘信息或查找备忘信息,为了便于用户使用备忘录,在一实施例中,耳机的交互功能可以包括信息记录交互功能。In scenarios such as running and driving, it is inconvenient for the user to take out the mobile phone to record the memo information or search for the memo information. In order to facilitate the user to use the memo, in one embodiment, the interactive function of the headset may include an information recording interactive function.
参照图3,示出了本发明的一种交互方法实施例三的步骤流程图,该方法应用于耳机,所述耳机与服务器通信连接,所述耳机具有交互助手,所述方法具体可以包括如下步骤:Referring to Fig. 3, there is shown a flowchart of the steps of Embodiment 3 of an interaction method of the present invention. The method is applied to a headset, the headset is in communication connection with a server, and the headset has an interactive assistant. The method may specifically include the following steps:
步骤301,所述耳机向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果。Step 301: The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
步骤302,调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。Step 302: Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
在本发明实施例中,耳机可以向服务器发送已记录的信息,也可以获取服务器已记录的信息并记录。In the embodiment of the present invention, the headset can send the recorded information to the server, and can also obtain and record the recorded information of the server.
在本发明实施例中,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:调用所述交互助手根据所述语音识别结果从所述用户语音中识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并播放。In the embodiment of the present invention, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result, includes: invoking the interactive assistant to recognize and record a target voice from the user's voice according to the voice recognition result, or to obtain and play the recorded target voice according to the voice recognition result.
当语音识别结果包括表示需要记录用户语音时,交互助手可以从用户语音识别目标语音。例如,用户说出“录一下音”,则交互助手可以录下之后采集的用户语音。用户可以设置录音的方式,以筛选需要录下的声音。例如,在会议中,用户希望录下参加会议的各个人员的语音,则耳机可以记录全向的用户语音。在课堂中,用户希望记录老师讲课的语音,则耳机可以记录指定方向的用户语音。When the voice recognition result includes that the user's voice needs to be recorded, the interactive assistant can recognize the target voice from the user's voice. For example, if the user says "record a sound", the interactive assistant can record the user's voice collected later. The user can set the recording method to filter the sounds that need to be recorded. For example, in a meeting, if the user wants to record the voice of each person participating in the meeting, the headset can record the omnidirectional user voice. In the classroom, if the user wants to record the teacher's speech, the headset can record the user's speech in the specified direction.
当语音识别结果包括表示需要播放已记录的目标语音时,交互助手可以查找目标语音并播放。When the voice recognition result includes that the recorded target voice needs to be played, the interactive assistant can search for the target voice and play it.
在本发明实施例中,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:调用所述交互助手根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录,或根据所述语音识别结果获取预设备忘信息并播放。In the embodiment of the present invention, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result, includes: invoking the interactive assistant to recognize and record memo information from the voice recognition result according to the voice recognition result, or to obtain preset memo information according to the voice recognition result and play it.
当语音识别结果包括表示需要记录备忘信息的信息时,交互助手可以从语音识别结果提取相关的内容,并记录为备忘信息。交互助手支持指令连说,例如,语音指令的形式可以为“帮我记下+备忘内容”,语音识别结果为“帮我记下,明天上午10点与销售在三楼会议室开会”,“帮我记下”表示需要记录备忘信息,交互助手将“明天上午10点与销售在三楼会议室开会”记录为备忘信息。表示需要记录备忘信息的内容还可以是“帮我几个事”,“帮我记个账”,“帮父母记个事”,“帮我记个车位”等等,可以预先训练语音识别模型来识别表示需要记录备忘信息的语音。When the voice recognition result includes information indicating that memo information needs to be recorded, the interactive assistant can extract the relevant content from the voice recognition result and record it as memo information. The interactive assistant supports continuous command utterances. For example, a voice command may take the form "note down for me + memo content"; for the voice recognition result "Note down for me, meeting with sales at 10 a.m. tomorrow in the third-floor meeting room", "note down for me" indicates that memo information needs to be recorded, and the interactive assistant records "meeting with sales at 10 a.m. tomorrow in the third-floor meeting room" as the memo information. Content indicating that memo information needs to be recorded may also be "note a few things for me", "keep an account for me", "note something for my parents", "note my parking spot", and so on; a speech recognition model can be trained in advance to identify utterances indicating that memo information needs to be recorded.
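As a hedged illustration of the "command + memo content" form, the sketch below extracts the memo content with a simple prefix match; the disclosure itself suggests a trained speech recognition model rather than the hard-coded prefix list assumed here.

```python
import re

# Illustrative prefix list only; it stands in for the trained recognition model
# mentioned above and is not part of the disclosed method.
MEMO_PREFIXES = ["帮我记下", "帮我记个账", "帮我记个车位", "帮父母记个事"]


def extract_memo(recognized_text: str):
    """Return the memo content if the recognition result asks to note something down."""
    for prefix in MEMO_PREFIXES:
        if recognized_text.startswith(prefix):
            content = recognized_text[len(prefix):]
            return re.sub(r"^[,，、\s]+", "", content) or None
    return None

# extract_memo("帮我记下,明天上午10点与销售在三楼会议室开会")
# -> "明天上午10点与销售在三楼会议室开会"
```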
当语音识别结果包括表示需要查询备忘信息的信息时,交互助手可以获取预设备忘信息并播放。When the voice recognition result includes information indicating that memo information needs to be queried, the interactive assistant can obtain the preset memo information and play it.
在本发明实施例中,交互助手可以从耳机本地获取备忘信息,也可以从服务器获取备忘信息。In the embodiment of the present invention, the interactive assistant may obtain the memo information locally from the headset, or obtain the memo information from the server.
在本发明实施例中,所述根据所述语音识别结果获取预设备忘信息并播放的步骤可以包括:从预设备忘信息中查找与所述语音识别结果匹配的信息;调用所述交互助手播放所述与所述语音识别结果匹配的信息。In the embodiment of the present invention, the step of acquiring and playing the preset memo information according to the voice recognition result may include: searching the preset memo information for information that matches the voice recognition result; and calling the interactive assistant to play the information matching the voice recognition result.
具体的,交互助手可以从备忘信息检索特定信息,例如检索与用户语音中的关键词、时间、地点、分类等信息匹配的信息。例如,语音识别结果为“明天上午跟销售的会议是几点?”,交互助手答复为“10点”。Specifically, the interactive assistant can retrieve specific information from the memo information, for example, retrieve information that matches the keyword, time, location, category, and other information in the user's voice. For example, the voice recognition result is "What time is the meeting with sales tomorrow morning?", and the interactive assistant answers "10 o'clock".
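One possible retrieval strategy, shown below purely as an illustration, is a naive keyword-overlap match over the recorded memo texts; the disclosure does not specify the matching algorithm.

```python
def find_matching_memo(query_keywords, memos):
    """Return the memo text sharing the most keywords with the query (illustrative only)."""
    best, best_score = None, 0
    for memo in memos:
        score = sum(1 for kw in query_keywords if kw in memo)
        if score > best_score:
            best, best_score = memo, score
    return best


memos = ["明天上午10点与销售在三楼会议室开会"]
# find_matching_memo(["明天", "销售", "会议"], memos) returns the memo above,
# from which the assistant can answer "10点".
```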
在本发明实施例中,耳机可以对备忘信息进行语义分析,得到语义分析结果。耳机也可以从服务器获取,由服务器对备忘信息进行语义分析得到的语义分析结果。耳机可以根据语义分析结果,对备忘信息生成标签信息。In the embodiment of the present invention, the earphone can perform semantic analysis on the memo information to obtain the semantic analysis result. The headset can also be obtained from the server, and the semantic analysis result obtained by the semantic analysis of the memo information by the server. The headset can generate tag information for the memo information according to the semantic analysis result.
具体的,服务器可以采用自然语言理解的算法,对语音识别结果进行语义分析,得到语义分析结果。Specifically, the server may use a natural language understanding algorithm to perform semantic analysis on the speech recognition result to obtain the semantic analysis result.
在本发明实施例中,所述根据所述语音识别结果获取预设备忘信息并播放的步骤可以包括:当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时,调用所述交互助手查找与目标标签信息匹配的预设备忘信息并播放。In the embodiment of the present invention, the step of acquiring and playing the preset memo information according to the voice recognition result may include: when the voice recognition result indicates a need to find memo information with target tag information, calling the interactive assistant to find preset memo information matching the target tag information and play it.
标签信息可以包括分类标签、属性标签等信息。交互助手可以根据语义分析结果,生成相应的标签信息。基于语义分析,交互助手可以对备忘信息进行分类或添加标签,例如,用户说完“帮我记下,明天上午10点与销售在三楼会议室开会”,基于语义分析,这条备忘信息属于待办事项类,用户除了通过关键词检索之外,还可以通过标签信息进行检索,比如用户语音为“明天我有什么待办事项?”,交互助手查找具有“待办事项”标签信息的备忘信息。The tag information may include information such as classification tags and attribute tags. The interactive assistant can generate corresponding tag information according to the semantic analysis result. Based on semantic analysis, the interactive assistant can classify the memo information or add tags to it. For example, after the user says "Note down for me, meeting with sales at 10 a.m. tomorrow in the third-floor meeting room", semantic analysis determines that this memo information belongs to the to-do category. In addition to searching by keyword, the user can also search by tag information; for example, if the user's voice is "What to-do items do I have tomorrow?", the interactive assistant finds the memo information with the "to-do" tag.
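A rule-based stand-in for the tag generation and tag-based lookup is sketched below; in the disclosure the tags come from the server's semantic analysis, so the hard-coded rules here are assumptions made only for illustration.

```python
def tag_memo(memo_text: str):
    """Hypothetical rule-based tagger standing in for the server-side semantic analysis."""
    tags = set()
    if "开会" in memo_text or "待办" in memo_text:
        tags.add("待办事项")
    if "账" in memo_text:
        tags.add("记账")
    return tags


def find_memos_by_tag(target_tag: str, memo_store):
    """memo_store: list of (memo_text, tags) pairs recorded earlier."""
    return [text for text, tags in memo_store if target_tag in tags]


text = "明天上午10点与销售在三楼会议室开会"
store = [(text, tag_memo(text))]
# find_memos_by_tag("待办事项", store) answers "明天我有什么待办事项?"
```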
在本发明实施例中,交互助手可以在记录备忘信息之后,生成针对备忘信息的提醒事件。也可以从服务器获取,由服务器针对备忘信息生成的提醒事件。提醒事件可以包括提醒内容和触发条件,提醒内容即为备忘信息,触发条件即触发提醒事件的条件,例如达到设定的时间。In the embodiment of the present invention, the interactive assistant may generate a reminder event for the memo information after recording the memo information. It can also be obtained from the server, and the reminder event generated by the server for the memo information. The reminder event may include reminder content and trigger conditions. The reminder content is memo information, and the trigger condition is a condition for triggering the reminder event, such as reaching a set time.
在本发明实施例中,当满足预设提醒事件的触发条件时,可以调用交互助手获取预设提醒事件相应的备忘信息并播放。例如,提醒事件的触发条件为“时间达到9:45”,则耳机播放提醒事件对应的备忘信息,耳机通过语音提醒“您在10点安排了在三楼会议室,与销售开会,请提前安排”。In the embodiment of the present invention, when the trigger condition of a preset reminder event is met, the interactive assistant can be called to obtain and play the memo information corresponding to the preset reminder event. For example, if the trigger condition of the reminder event is "the time reaches 9:45", the headset plays the memo information corresponding to the reminder event and reminds the user by voice: "You have a meeting with sales scheduled at 10 o'clock in the third-floor meeting room, please arrange in advance."
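A minimal sketch of a time-triggered reminder event follows, assuming the event stores a trigger time and the memo content; the field names are illustrative assumptions, not part of the disclosure.

```python
from datetime import datetime

# Illustrative reminder event: trigger condition (a time) plus reminder content (the memo).
reminder_event = {
    "trigger_time": datetime(2021, 1, 1, 9, 45),
    "memo": "您在10点安排了在三楼会议室,与销售开会,请提前安排",
}


def check_and_play(event, now: datetime, play) -> bool:
    """When the trigger condition is met, play the memo through the assistant."""
    if now >= event["trigger_time"]:
        play(event["memo"])
        return True
    return False

# check_and_play(reminder_event, datetime(2021, 1, 1, 9, 45), print) -> True, memo is spoken
```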
在本发明实施例中,耳机可以从服务器获取用户语音的语音识别结果;调用交互助手根据语音识别结果从用户语音中识别信息并记录,或,根据语音识别结果获取记录的信息并播放。本发明实施例实现了不需要用户使用手操作耳机,可以通过耳机记录信息或播放已记录的信息,简化用户的操作过程。In the embodiment of the present invention, the headset can obtain the voice recognition result of the user's voice from the server; call the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result. The embodiment of the present invention realizes that the user does not need to use a hand to operate the earphone, and can record information or play recorded information through the earphone, which simplifies the user's operation process.
在步行、骑行等场景下,用户不方便拿出手机进行查询,为了方便用户查询,在一实施例中,耳机的交互功能可以包括问答交互功能。In scenarios such as walking, cycling, etc., it is inconvenient for the user to take out the mobile phone to make an inquiry. To facilitate the user's inquiry, in one embodiment, the interactive function of the headset may include a question-and-answer interactive function.
参照图4,示出了本发明的一种交互方法实施例四的步骤流程图,该方法应用于耳机,所述耳机与服务器通信连接,所述耳机具有交互助手,所述方法具体可以包括如下步骤:Referring to Fig. 4, there is shown a flowchart of the steps of Embodiment 4 of an interaction method of the present invention. The method is applied to a headset, the headset is in communication connection with a server, and the headset has an interactive assistant. The method may specifically include the following steps:
步骤401,所述耳机向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果。Step 401: The headset sends a user voice to the server, and obtains a voice recognition result of the user voice from the server.
步骤402,从所述语音识别结果中获取对话语句。Step 402: Obtain a dialogue sentence from the speech recognition result.
在本发明实施例中,耳机的交互助手可以与用户进行对话,可以从语音识别结果中获取用户的对话语句。In the embodiment of the present invention, the interactive assistant of the headset can have a conversation with the user, and can obtain the conversation sentence of the user from the voice recognition result.
步骤403,调用所述交互助手生成与所述对话语句匹配的答复语句并播放。Step 403: Invoke the interactive assistant to generate a reply sentence matching the dialogue sentence and play it.
交互助手可以根据用户的对话语句,生成匹配的答复语句并播放,从而与用户进行语音问答。The interactive assistant can generate and play matching reply sentences according to the user's dialogue sentences, so as to conduct voice questions and answers with the user.
在本发明实施例中,所述调用所述交互助手,生成与所述对话语句匹配的答复语句并播放的步骤可以包括:获取用户方位信息;调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放。In the embodiment of the present invention, the step of invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence may include: obtaining user location information; invoking the interactive assistant according to the user location information and The dialogue sentence is described, and the reply sentence for voice navigation is generated and played.
用户方位信息是指用户的正面朝向,在本发明实施例中,耳机可以具有方位传感器,在用户佩戴耳机时,方位传感器可以实时检测用户方位信息。交互助手可以获取方位传感器检测的用户方位信息。The user orientation information refers to the frontal orientation of the user. In the embodiment of the present invention, the headset may have an orientation sensor. When the user wears the headset, the orientation sensor can detect the user's orientation information in real time. The interactive assistant can obtain the user's position information detected by the position sensor.
在一种示例中,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放的步骤可以包括:获取用户地理位置信息;调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息,生成用于语音导航的答复语句并播放。In an example, the step of invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence may include: obtaining user geographic location information; and invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence, and the user geographic location information.
耳机可以获取当前的用户地理位置信息,例如,耳机检测当前的用户地理位置信息。又例如,耳机可以与移动设备通信连接。移动设备可以具备定位能力,例如设置GPS模块,或者通过与基站通信进行定位,耳机可以获取由移动设备检测的当前的用户地理位置信息。The headset can obtain the current user's geographic location information, for example, the headset detects the current user's geographic location information. For another example, the headset can be communicatively connected with the mobile device. The mobile device may have positioning capabilities, for example, by setting a GPS module, or positioning by communicating with a base station, the headset may obtain the current user's geographic location information detected by the mobile device.
在另一种示例中,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放的步骤可以包括:向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。In another example, the step of invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence may include: sending navigation query information to the server, the navigation query information including the user location information and the dialogue sentence; and receiving and playing the reply sentence for voice navigation sent by the server, the reply sentence for voice navigation being generated by the server through a query according to the user location information and the dialogue sentence.
服务器可以根据用户方位信息、对话语句和用户地理位置信息查询生成用于语音导航的答复语句,然后将答复语句发送回耳机。The server can generate a reply sentence for voice navigation according to the user's location information, dialogue sentence, and user's geographic location information query, and then send the reply sentence back to the headset.
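As a hedged illustration, the sketch below turns the user's facing direction, a destination position, and the remaining distance into a spoken instruction; the coordinates, thresholds, and phrasing are assumptions, and a real server would also rely on map and POI data as described above.

```python
import math


def navigation_reply(user_heading_deg, user_pos, dest_pos, distance_m):
    """Toy reply generator: compare the user's facing direction with the bearing to
    the destination and phrase a spoken navigation instruction."""
    # bearing measured clockwise from north; positions are (east, north) offsets
    bearing = math.degrees(math.atan2(dest_pos[0] - user_pos[0],
                                      dest_pos[1] - user_pos[1])) % 360
    diff = (bearing - user_heading_deg + 360) % 360
    if diff < 30 or diff > 330:
        turn = "继续直行"
    elif diff < 180:
        turn = "右转"
    else:
        turn = "左转"
    return f"{turn},距离目的地还有{int(distance_m)}米"

# navigation_reply(0, (0, 0), (100, 100), 600) -> "右转,距离目的地还有600米"
```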
耳机的交互助手可以与用户进行语音导航交互,在交互过程中可以根据实时的用户方位信息、实时的用户地理位置信息和用户不断说出的语音,播放导航语音。The interactive assistant of the headset can interact with the user for voice navigation. During the interaction, the navigation voice can be played based on the real-time user location information, real-time user geographic location information, and the voice continuously spoken by the user.
例如,用户:附近有什么好吃的?For example, user: What's delicious nearby?
交互助手:您想吃什么风格的?Interactive Assistant: What style would you like to eat?
用户:四川风格。User: Sichuan style.
交互助手:附近800米有家“眉州东坡”评价不错,您考虑去吗?Interactive Assistant: There is a "Meizhou Dongpo" house 800 meters away that has a good evaluation. Would you consider going there?
用户:好的。User: Okay.
交互助手:现在帮您导航到“眉州东坡”,您看到前方有个红色高楼了吗?Interactive Assistant: Now I will help you navigate to "Meizhou Dongpo", do you see a red tall building ahead?
用户:看到了。User: I saw it.
交互助手:您朝红色高楼方向走大概200米。Interactive assistant: You walk about 200 meters in the direction of the red tall building.
根据实时的用户地理位置信息确定用户此时开始向红色高楼方向步行。According to the real-time user location information, it is determined that the user starts to walk toward the red tall building at this time.
交互助手:红色高楼下面有个理发店,您看到理发店之后,右转。Interactive assistant: There is a barber shop at the bottom of the red tall building. After you see the barber shop, turn right.
根据实时的用户方位信息,确定用户此时开始右转。According to the real-time user location information, it is determined that the user starts to turn right at this time.
交互助手:现在距离目的地还有600米。Interactive Assistant: There are still 600 meters away from the destination.
交互助手:您看到前面有个十字路口了吗?Interactive Assistant: Do you see an intersection ahead?
用户:看到了。User: I saw it.
交互助手:您走到十字路口之后,左转。Interactive Assistant: After you reach the crossroad, turn left.
根据实时的用户地理位置信息和用户方位信息,确定用户走到十字路口之后,左转。According to the real-time user location information and user location information, it is determined that the user walks to the intersection and turns left.
交互助手:继续直行,“眉州东坡”在前方100米处。Interactive Assistant: Continue straight ahead, "Meizhou Dongpo" is 100 meters ahead.
根据实时的用户地理位置信息确定用户继续前行。According to the real-time user location information, the user is determined to move forward.
交互助手:“眉州东坡”在您的左手边,导航已完成,祝您用餐愉快。Interactive assistant: "Meizhou Dongpo" is on your left, navigation is complete, I wish you a pleasant meal.
在本发明实施例中,耳机可以从服务器获取用户语音的语音识别结果;从语音识别结果中获取对话语句,然后调用交互助手生成与对话语句匹配的答复语句并播放。本发明实施例实现了不需要用户使用手操作耳机,可以由耳机根据用户语音进行问答,简化用户的操作过程。In the embodiment of the present invention, the headset can obtain the voice recognition result of the user's voice from the server; obtain the dialogue sentence from the voice recognition result, and then call the interactive assistant to generate and play the reply sentence matching the dialogue sentence. The embodiment of the present invention realizes that the user does not need to use the hand to operate the earphone, and the earphone can conduct question and answer according to the user's voice, which simplifies the user's operation process.
以下从服务器的角度进行说明。The following describes from the perspective of the server.
参照图5,示出了本发明的一种交互方法实施例五的步骤流程图,该方法应用于服务器,服务器与耳机通信连接,耳机具有交互助手,所述方法包括:Referring to Fig. 5, there is shown a flowchart of the steps of Embodiment 5 of an interaction method of the present invention. The method is applied to a server, the server is in communication connection with a headset, and the headset has an interactive assistant. The method includes:
步骤501,所述服务器接收所述耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果. Step 501, the server receives the user voice sent by the headset, and recognizes the user voice to obtain a voice recognition result.
在本发明实施例中,服务器具有语音识别功能,可以对耳机采集的用户语音进行语音识别,得到语音识别结果。In the embodiment of the present invention, the server has a voice recognition function, and can perform voice recognition on the user's voice collected by the headset to obtain a voice recognition result.
步骤502,向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。Step 502: Send the voice recognition result to the headset, where the headset is used to invoke the interactive assistant to perform an interactive operation according to the voice recognition result.
耳机安装有交互助手,交互助手可以是安装在耳机的程序,可以提供多种多样的交互功能。耳机接收到服务器发送的语音识别结果后,可以调用交互助手根据语音识别结果执行交互操作,以实现耳机的多种交互功能。The headset is equipped with an interactive assistant, which can be a program installed on the headset, and can provide a variety of interactive functions. After the headset receives the voice recognition result sent by the server, the interactive assistant can be called to perform interactive operations according to the voice recognition result, so as to realize various interactive functions of the headset.
交互助手可以通过特定的方式被唤醒,例如,特定的语音指令。交互助手的一些交互功能可以在被唤醒后才执行,一些交互功能可以在不被唤醒的情况下也能执行。The interactive assistant can be awakened in a specific way, for example, a specific voice command. Some interactive functions of the interactive assistant can be executed after being awakened, and some interactive functions can be executed without being awakened.
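As an illustration of the wake-up behaviour just described, the sketch below assumes a wake phrase and a set of functions allowed without waking; both are invented configuration values, not values from this disclosure.

```python
# Hypothetical wake-up gating for the interactive assistant.
WAKE_PHRASES = {"hello assistant"}        # assumed wake command, not from this disclosure
ALWAYS_AVAILABLE = {"memo_record"}        # functions assumed to work without waking

def contains_wake_command(recognized_text: str) -> bool:
    lowered = recognized_text.lower()
    return any(phrase in lowered for phrase in WAKE_PHRASES)

def may_execute(function_name: str, assistant_awake: bool) -> bool:
    # Some functions run only after the assistant has been woken up,
    # while others are allowed even when it has not been.
    return assistant_awake or function_name in ALWAYS_AVAILABLE
```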
In the embodiment of the present invention, the server can send the voice recognition result of the user's voice to the headset, and the headset's interactive assistant can perform interactive operations according to the voice recognition result. The user does not need to operate the headset by hand, and the various interactive functions of the headset can be realized.
在一种实施例中,耳机的交互功能可以包括歌曲推荐功能。耳机可以调用交互助手根据语音识别结果,获取用户状态。当语音识别结果包括表征用户需要查找合适的歌曲或调整歌曲的音效的信息时,耳机的交互助手可以请求服务器来推荐歌曲或调整歌曲的音效。In an embodiment, the interactive function of the headset may include a song recommendation function. The headset can call the interactive assistant to obtain the user status according to the voice recognition result. When the voice recognition result includes information indicating that the user needs to find a suitable song or adjust the sound effect of the song, the interactive assistant of the headset may request the server to recommend the song or adjust the sound effect of the song.
In an example, the server may obtain the user status detected by the headset, find a recommended song matching the user status, and send it to the headset, so that the headset calls the interactive assistant to recommend the recommended song to the user. In another example, the server may obtain the user status detected by the headset, determine a sound effect matching the user status, adjust a preset song to that sound effect, and send the preset song with the adjusted sound effect to the headset, so that the headset calls the interactive assistant to play the preset song with the adjusted sound effect.
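For a concrete picture of this embodiment, the sketch below shows one possible mapping from a reported user status to a recommended song or a sound-effect profile; the status names, song catalogue, and equalizer values are assumptions for illustration only.

```python
# Hypothetical mapping from user status to a song recommendation or an
# equalizer-style sound-effect profile; catalogue and values are invented.
from typing import Optional

SONGS_BY_STATUS = {
    "running": ["Up-tempo Track A", "Up-tempo Track B"],
    "walking": ["Mid-tempo Track C"],
    "resting": ["Calm Track D"],
}

EFFECT_BY_STATUS = {
    "running": {"bass_gain_db": 4, "treble_gain_db": 2},
    "resting": {"bass_gain_db": -2, "treble_gain_db": 0},
}

def recommend_song(user_status: str) -> Optional[str]:
    candidates = SONGS_BY_STATUS.get(user_status, [])
    return candidates[0] if candidates else None

def adjust_preset_song(user_status: str, preset_song: dict) -> dict:
    # Attach the matched effect to the song metadata; actual audio
    # processing would happen in the playback pipeline.
    effect = EFFECT_BY_STATUS.get(user_status, {})
    return {**preset_song, "effect": effect}
```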
在另一种实施例中,耳机的交互功能可以包括信息记录交互功能。服务器可以根据语音识别结果从用户语音中识别信息并记录,或,根据语音识别结果获取已记录的信息并向耳机发送,以使耳机播放已记录的信息。In another embodiment, the interactive function of the headset may include an information recording interactive function. The server may recognize and record information from the user's voice according to the voice recognition result, or obtain the recorded information according to the voice recognition result and send it to the headset, so that the headset can play the recorded information.
服务器可以向耳机发送已记录的信息,也可以获取耳机已记录的信息并记录。The server can send the recorded information to the headset, or obtain and record the recorded information of the headset.
In an example, the server may recognize a target voice from the user's voice according to the voice recognition result and record it, or obtain a recorded target voice according to the voice recognition result and send it to the headset, so that the headset can play the target voice. In another example, the server may recognize memo information from the voice recognition result and record it, or obtain preset memo information according to the voice recognition result and send it to the headset, so that the headset can play the preset memo information.
After recording the memo information, the server may generate a reminder event for the memo information and send the reminder event to the headset, so that when the trigger condition of the preset reminder event is met, the headset can call the interactive assistant to obtain the memo information corresponding to the preset reminder event and play it.
The server can also perform semantic analysis on the memo information to obtain a semantic analysis result, and generate tag information for the memo information according to the semantic analysis result. When the voice recognition result indicates a need to find memo information with target tag information, the server can search for the preset memo information that matches the target tag information and send it to the headset, so that the headset can play the preset memo information that matches the target tag information.
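A minimal sketch of the memo handling above, with a simple keyword lookup standing in for the semantic analysis step; the tag vocabulary and matching rule are illustrative assumptions, not the method of this disclosure.

```python
# Hypothetical memo store with keyword-based tagging standing in for
# semantic analysis, plus tag-based lookup; all names are invented.
from typing import Dict, List

TAG_KEYWORDS = {
    "meeting": ["meeting", "conference"],
    "shopping": ["buy", "purchase"],
}

memo_store: List[Dict] = []

def tag_memo(text: str) -> List[str]:
    lowered = text.lower()
    return [tag for tag, words in TAG_KEYWORDS.items()
            if any(word in lowered for word in words)]

def record_memo(text: str) -> Dict:
    memo = {"text": text, "tags": tag_memo(text)}
    memo_store.append(memo)
    return memo

def find_memos_by_tag(target_tag: str) -> List[Dict]:
    return [memo for memo in memo_store if target_tag in memo["tags"]]
```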
In another embodiment, the interactive function of the headset may include a question-and-answer interactive function. The server obtains a dialogue sentence from the voice recognition result, generates a reply sentence that matches the dialogue sentence, and sends it to the headset, so that the headset plays the reply sentence that matches the dialogue sentence and realizes question-and-answer interaction with the user.
In an example, the question-and-answer interactive function may include voice navigation. The server may obtain the user orientation information detected by the headset, generate a reply sentence for voice navigation according to the user orientation information and the dialogue sentence, and send it to the headset, so that the headset plays the reply sentence for voice navigation.
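To illustrate how such a navigation reply sentence could be assembled from the user orientation information and geographic location, the sketch below combines a heading, a position, and a target position; the turn thresholds and the arrival distance are invented values, not values from this disclosure.

```python
# Hypothetical assembly of a voice-navigation reply sentence from the user's
# heading, position, and target position; thresholds are invented.
import math

def bearing_to(user_pos, target_pos) -> float:
    """Approximate bearing from user_pos to target_pos in degrees (0 = north).

    Positions are (latitude, longitude) pairs in degrees.
    """
    d_lon = math.radians(target_pos[1] - user_pos[1])
    lat1, lat2 = math.radians(user_pos[0]), math.radians(target_pos[0])
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def navigation_reply(user_heading_deg: float, user_pos, target_pos, distance_m: float) -> str:
    if distance_m < 30:  # assumed arrival threshold
        return "You have arrived at your destination."
    turn = (bearing_to(user_pos, target_pos) - user_heading_deg + 360.0) % 360.0
    if turn < 30 or turn > 330:
        direction = "continue straight"
    elif turn < 180:
        direction = "turn right"
    else:
        direction = "turn left"
    return f"Please {direction}; the destination is about {int(distance_m)} meters away."
```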
It should be noted that, for the sake of simple description, the method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the embodiments of the present invention are not limited by the described sequence of actions, because according to the embodiments of the present invention, some steps may be performed in another order or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to FIG. 6, there is shown a structural block diagram of Embodiment 1 of an interaction device of the present invention. The interaction device is applied to a headset, the headset is in communication connection with a server, and the headset has an interactive assistant. The device may specifically include the following modules:
语音识别结果获取模块601,用于向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;The voice recognition result obtaining module 601 is configured to send a user voice to the server, and obtain a voice recognition result of the user voice from the server;
第一交互模块602,用于调用所述交互助手根据所述语音识别结果执行交互操作。The first interaction module 602 is configured to invoke the interaction assistant to perform an interaction operation according to the voice recognition result.
Referring to FIG. 7, there is shown a structural block diagram of Embodiment 2 of an interaction device of the present invention. The interaction device is applied to a headset, the headset is in communication connection with a server, and the headset has an interactive assistant. The device may specifically include the following modules:
语音识别结果获取模块701,用于向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;The voice recognition result obtaining module 701 is configured to send a user voice to the server, and obtain a voice recognition result of the user voice from the server;
第一交互模块702,用于调用所述交互助手根据所述语音识别结果执行交互操作。The first interaction module 702 is configured to call the interaction assistant to perform an interaction operation according to the voice recognition result.
在本发明实施例中,所述第一交互模块702可以包括:In the embodiment of the present invention, the first interaction module 702 may include:
唤醒模块7021,用于根据所述语音识别结果唤醒所述交互助手;The wake-up module 7021 is used to wake up the interactive assistant according to the voice recognition result;
用户状态获取模块7022,用于获取用户状态;The user status acquisition module 7022 is used to acquire the user status;
歌曲交互模块7023,用于调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。The song interaction module 7023 is used to call the interactive assistant to recommend songs or play songs according to the user status.
在本发明实施例中,所述耳机具有重力传感器,所述用户状态获取模块7022,用于获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。In the embodiment of the present invention, the headset has a gravity sensor, and the user state acquisition module 7022 is configured to acquire sensor data detected by the gravity sensor, and determine the user status according to the sensor data.
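For illustration, the sketch below shows one plausible way to derive a coarse user status from gravity-sensor (acceleration) samples; the variance thresholds and status names are invented, not values from this disclosure.

```python
# Hypothetical classification of user status from gravity/acceleration samples;
# the variance thresholds are invented for illustration.
import math
from typing import List, Tuple

def classify_user_status(samples: List[Tuple[float, float, float]]) -> str:
    """samples: (x, y, z) acceleration readings in m/s^2 over a short window."""
    if not samples:
        return "unknown"
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(magnitudes) / len(magnitudes)
    variance = sum((m - mean) ** 2 for m in magnitudes) / len(magnitudes)
    if variance > 8.0:   # strong, irregular motion
        return "running"
    if variance > 1.0:   # moderate periodic motion
        return "walking"
    return "resting"
```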
In the embodiment of the present invention, the song interaction module 7023 is configured to send the user status to the server, and to receive the recommended song sent by the server and call the interactive assistant to recommend it to the user, where the recommended song is a song found by the server that matches the user status.
In the embodiment of the present invention, the song interaction module 7023 is configured to send the user status to the server, and to receive the preset song with the adjusted sound effect sent by the server and call the interactive assistant to play it; the preset song with the adjusted sound effect is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect.
在本发明实施例中,所述第一交互模块702可以包括:In the embodiment of the present invention, the first interaction module 702 may include:
第一记录交互模块7024,用于调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。The first recording interaction module 7024 is configured to call the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
In the embodiment of the present invention, the first recording interaction module 7024 is configured to call the interactive assistant to recognize memo information from the voice recognition result according to the voice recognition result and record it, or to obtain preset memo information according to the voice recognition result and play it.
In the embodiment of the present invention, the first recording interaction module 7024 is configured to call the interactive assistant to recognize a target voice from the user's voice according to the voice recognition result and record it, or to obtain a recorded target voice according to the voice recognition result and play it.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第一记录信息传输模块703,用于向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。The first recorded information transmission module 703 is configured to send recorded information to the server, and/or obtain and record the recorded information of the server.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第一提醒事件生成模块704,用于在记录备忘信息之后,生成针对所述备忘信息的提醒事件。The first reminder event generating module 704 is configured to generate a reminder event for the memo information after the memo information is recorded.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第一提醒事件获取模块705,用于从所述服务器获取针对备忘信息的预设提醒事件。The first reminder event obtaining module 705 is configured to obtain a preset reminder event for the memo information from the server.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第一提醒事件触发模块706,用于当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The first reminder event triggering module 706 is configured to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
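As an illustration of a trigger condition for a preset reminder event, the sketch below assumes a simple time-based trigger; the event structure is hypothetical and not taken from this disclosure.

```python
# Hypothetical time-based trigger check for preset reminder events;
# the event structure is assumed, not taken from this disclosure.
import time
from typing import Dict, List, Optional

def due_reminder(events: List[Dict], now: Optional[float] = None) -> Optional[Dict]:
    """Return the first unfired reminder event whose trigger time has passed."""
    now = time.time() if now is None else now
    for event in events:
        if not event.get("fired") and event.get("trigger_ts", float("inf")) <= now:
            event["fired"] = True
            return event  # the caller then fetches the memo information and plays it
    return None
```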
In the embodiment of the present invention, the first recording interaction module 7024 is configured to search the preset memo information for information that matches the voice recognition result, and to call the interactive assistant to play the information that matches the voice recognition result.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第一语义分析模块707,用于获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;The first semantic analysis module 707 is configured to obtain a semantic analysis result obtained by the server performing semantic analysis on the memo information;
第一标签生成模块708,用于根据语义分析结果,对所述备忘信息生成标签信息。The first tag generation module 708 is configured to generate tag information for the memo information according to the semantic analysis result.
In the embodiment of the present invention, the first recording interaction module 7024 is configured to, when the voice recognition result indicates a need to find memo information with target tag information, call the interactive assistant to find the preset memo information matching the target tag information and play it.
在本发明实施例中,所述第一交互模块702可以包括:In the embodiment of the present invention, the first interaction module 702 may include:
第一对话语句获取模块7025,用于从所述语音识别结果中获取对话语句;The first dialogue sentence obtaining module 7025 is configured to obtain the dialogue sentence from the voice recognition result;
第一对话交互模块7026,用于调用所述交互助手生成与所述对话语句匹配的答复语句并播放。The first dialogue interaction module 7026 is configured to call the interactive assistant to generate and play a reply sentence matching the dialogue sentence.
In the embodiment of the present invention, the first dialogue interaction module 7026 is configured to obtain user orientation information, and to call the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information and the dialogue sentence.
在本发明实施例中,所述耳机具有方位传感器,所述第一对话交互模块7026,用于获取所述方位传感器检测的用户方位信息。In the embodiment of the present invention, the headset has an orientation sensor, and the first dialogue interaction module 7026 is configured to obtain user orientation information detected by the orientation sensor.
In the embodiment of the present invention, the first dialogue interaction module 7026 is configured to obtain user geographic location information, and to call the interactive assistant to generate and play a reply sentence for voice navigation according to the user orientation information, the dialogue sentence, and the user geographic location information.
In the embodiment of the present invention, the first dialogue interaction module 7026 is configured to send navigation query information to the server, where the navigation query information includes the user orientation information and the dialogue sentence, and to receive and play the reply sentence for voice navigation sent by the server, where the reply sentence for voice navigation is generated by the server by querying according to the user orientation information and the dialogue sentence.
Referring to FIG. 8, there is shown a structural block diagram of Embodiment 3 of an interaction device of the present invention. The interaction device is applied to a server, the server is in communication connection with a headset, and the headset has an interactive assistant. The device may specifically include the following modules:
语音识别模块801,用于接收所述耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;The voice recognition module 801 is configured to receive the user voice sent by the headset, and recognize the user voice to obtain a voice recognition result;
语音识别结果发送模块802,用于向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。The voice recognition result sending module 802 is configured to send the voice recognition result to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
Referring to FIG. 9, there is shown a structural block diagram of Embodiment 4 of an interaction device of the present invention. The interaction device is applied to a server, the server is in communication connection with a headset, and the headset has an interactive assistant. The device may specifically include the following modules:
语音识别模块901,用于接收所述耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;The voice recognition module 901 is configured to receive the user voice sent by the headset, and recognize the user voice to obtain a voice recognition result;
语音识别结果发送模块902,用于向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。The voice recognition result sending module 902 is configured to send the voice recognition result to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第一用户状态获取模块903,用于获取所述耳机检测的用户状态;The first user status acquiring module 903 is configured to acquire the user status detected by the headset;
第一歌曲发送模块904,用于查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。The first song sending module 904 is configured to find a recommended song matching the user's status and send it to the earphone; the earphone is used to call the interactive assistant to recommend the recommended song to the user.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第二用户状态获取模块905,用于获取所述耳机检测的用户状态;The second user status acquiring module 905 is configured to acquire the user status detected by the headset;
音效确定模块906,用于确定与所述用户状态匹配的音效;The sound effect determining module 906 is configured to determine a sound effect matching the user state;
The second song sending module 907 is configured to adjust a preset song to the sound effect and send the preset song with the adjusted sound effect to the headset, and the headset is used to call the interactive assistant to play the preset song with the adjusted sound effect.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
The recorded information processing module 908 is configured to recognize information from the user's voice according to the voice recognition result and record it, or to obtain recorded information according to the voice recognition result and send it to the headset, and the headset is used to play the recorded information.
在本发明实施例中,所述记录信息处理模块908可以包括:In the embodiment of the present invention, the record information processing module 908 may include:
The memo information processing module 9081 is configured to recognize memo information from the voice recognition result according to the voice recognition result and record it, or to obtain preset memo information according to the voice recognition result and send it to the headset.
在本发明实施例中,所述记录信息处理模块908可以包括:In the embodiment of the present invention, the record information processing module 908 may include:
语音处理模块9082,用于根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。The voice processing module 9082 is configured to recognize and record a target voice from the user voice according to the voice recognition result, or obtain a recorded target voice according to the voice recognition result and send it to the headset.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第二记录信息传输模块909,用于向所述耳机发送从所述用户语音识别的信息,和/或,获取所述耳机已记录的信息并记录。The second recorded information transmission module 909 is configured to send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第二提醒事件生成模块910,用于在记录备忘信息之后,生成针对所述备忘信息的提醒事件;The second reminder event generating module 910 is configured to generate a reminder event for the memo information after the memo information is recorded;
The reminder event sending module 911 is configured to send the reminder event to the headset, and the headset is used to call the interactive assistant to obtain the memo information corresponding to the preset reminder event and play it when the trigger condition of the preset reminder event is met.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
第二语义分析模块912,用于对备忘信息进行语义分析,得到语义分析结果;The second semantic analysis module 912 is configured to perform semantic analysis on the memo information to obtain a semantic analysis result;
第二标签生成模块913,用于根据语义分析结果,对所述备忘信息生成标签信息。The second label generating module 913 is configured to generate label information for the memo information according to the semantic analysis result.
In the embodiment of the present invention, the memo information processing module 9081 is configured to, when the voice recognition result indicates a need to find memo information with target tag information, search for the preset memo information matching the target tag information and send it to the headset.
在本发明实施例中,所述的交互装置还可以包括:In the embodiment of the present invention, the interaction device may further include:
对话语句获取模块914,用于从所述语音识别结果中获取对话语句;The dialogue sentence obtaining module 914 is configured to obtain the dialogue sentence from the speech recognition result;
答复语句发送模块915,用于生成与所述对话语句匹配的答复语句并向所述耳机发送,所述耳机用于播放所述与所述对话语句匹配的答复语句。The reply sentence sending module 915 is configured to generate a reply sentence matching the dialogue sentence and send it to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
In the embodiment of the present invention, the reply sentence sending module 915 is configured to obtain the user orientation information detected by the headset, and to generate a reply sentence for voice navigation according to the user orientation information and the dialogue sentence and send it to the headset.
The device embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
Fig. 10 is a structural block diagram of a headset 1000 for interaction according to an exemplary embodiment. Referring to Fig. 10, the headset 1000 may include one or more of the following components: a processing component 1002, a memory 1004, a power component 1006, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014, and a communication component 1016.
处理组件1002通常控制耳机1000的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理元件1002可以包括一个或多个处理器1020来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件1002可以包括一个或多个模块,便于处理组件1002和其他组件之间的交互。例如,处理部件1002可以包括多媒体模块,以方便多媒体组件1008和处理组件1002之间的交互。The processing component 1002 generally controls the overall operations of the headset 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1002 may include one or more processors 1020 to execute instructions to complete all or part of the steps of the foregoing method. In addition, the processing component 1002 may include one or more modules to facilitate the interaction between the processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate the interaction between the multimedia component 1008 and the processing component 1002.
存储器1004被配置为存储各种类型的数据以支持在耳机1000的操作。这些数据的示例包括用于在耳机1000上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器1004可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。The memory 1004 is configured to store various types of data to support operations on the headset 1000. Examples of these data include instructions for any application or method operated on the headset 1000, contact data, phone book data, messages, pictures, videos, etc. The memory 1004 can be implemented by any type of volatile or non-volatile storage device or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable and Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic Disk or Optical Disk.
电力组件1006为耳机1000的各种组件提供电力。电力组件1006可以包括电源管理系统,一个或多个电源,及其他与为耳机1000生成、管理和分配电力相关联的组件。The power component 1006 provides power to various components of the earphone 1000. The power component 1006 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the headset 1000.
The multimedia component 1008 includes a screen that provides an output interface between the headset 1000 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure related to the touch or swipe operation. In some embodiments, the multimedia component 1008 includes a front camera and/or a rear camera. When the headset 1000 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
音频组件1010被配置为输出和/或输入音频信号。例如,音频组件1010包括一个麦克风(MIC),当耳机1000处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器1004或经由通信组件1016发送。在一些实施例中,音频组件1010还包括一个扬声器,用于输出音频信号。The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 includes a microphone (MIC), and when the headset 1000 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal. The received audio signal may be further stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, the audio component 1010 further includes a speaker for outputting audio signals.
I/O接口1012为处理组件1002和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。The I/O interface 1012 provides an interface between the processing component 1002 and a peripheral interface module. The above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
传感器组件1014包括一个或多个传感器,用于为耳机1000提供各个方面的状态评估。例如,传感器组件1014可以检测到耳机1000的打开/关闭状态,组件的相对定位,例如所述组件为耳机1000的显示器和小键盘,传感器组件1014还可以检测耳机1000或耳机1000一个组件的位置改变,用户与耳机1000接触的存在或不存在,耳机1000方位或加速/减速和耳机1000的温度变化。传感器组件1014可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件1014还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件1014还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。The sensor component 1014 includes one or more sensors for providing the earphone 1000 with various aspects of state evaluation. For example, the sensor component 1014 can detect the on/off state of the headset 1000 and the relative positioning of the components. For example, the component is the display and the keypad of the headset 1000. The sensor component 1014 can also detect the position change of the headset 1000 or a component of the headset 1000. , The presence or absence of contact between the user and the headset 1000, the orientation or acceleration/deceleration of the headset 1000, and the temperature change of the headset 1000. The sensor assembly 1014 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact. The sensor component 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1016 is configured to facilitate wired or wireless communication between the headset 1000 and other devices. The headset 1000 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1016 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
在示例性实施例中,耳机1000可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。In an exemplary embodiment, the headset 1000 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable A gate array (FPGA), controller, microcontroller, microprocessor, or other electronic component is used to implement the above method.
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器1004,上述指令可由耳机1000的处理器1020执行以完成上述方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1004 including instructions, and the foregoing instructions may be executed by the processor 1020 of the headset 1000 to complete the foregoing method. For example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
A headset includes a memory and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by one or more processors, and the one or more programs include instructions for performing the following operations:
向服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;Sending a user voice to a server, and obtaining a voice recognition result of the user voice from the server;
调用交互助手根据所述语音识别结果执行交互操作。The interactive assistant is invoked to perform interactive operations according to the voice recognition result.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
根据所述语音识别结果唤醒所述交互助手;Wake up the interactive assistant according to the voice recognition result;
获取用户状态;Get user status;
调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。Invoke the interactive assistant to recommend songs or play songs according to the user status.
可选地,所述耳机具有重力传感器,所述获取用户状态,包括:Optionally, the headset has a gravity sensor, and the acquiring the user status includes:
获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。Acquire the sensing data detected by the gravity sensor, and determine the user state according to the sensing data.
可选地,所述调用所述交互助手根据所述用户状态推荐歌曲,包括:Optionally, the invoking the interactive assistant to recommend songs according to the user status includes:
向所述服务器发送所述用户状态;Sending the user status to the server;
接收所述服务器发送的推荐歌曲并调用所述交互助手向用户推荐,所述推荐歌曲为由所述服务器查找的与所述用户状态匹配的歌曲。Receive the recommended song sent by the server and call the interactive assistant to recommend to the user, the recommended song is a song searched by the server that matches the user's status.
可选地,所述调用所述交互助手根据所述用户状态播放歌曲,包括:Optionally, the invoking the interactive assistant to play the song according to the user status includes:
向所述服务器发送所述用户状态;Sending the user status to the server;
Receive the preset song with the adjusted sound effect sent by the server and call the interactive assistant to play it; the preset song with the adjusted sound effect is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
可选地,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:Optionally, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result, includes:
Invoke the interactive assistant to recognize memo information from the voice recognition result according to the voice recognition result and record it, or obtain preset memo information according to the voice recognition result and play it.
可选地,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:Optionally, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or to obtain and play the recorded information according to the voice recognition result, includes:
调用所述交互助手根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并播放。The interactive assistant is called to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
可选地,还包含进行执行如下操作的指令:向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。Optionally, it also includes instructions for performing the following operations: sending the recorded information to the server, and/or obtaining and recording the recorded information by the server.
可选地,还包含进行执行如下操作的指令:在记录备忘信息之后,生成针对所述备忘信息的提醒事件。Optionally, it further includes an instruction to perform the following operations: after the memo information is recorded, a reminder event for the memo information is generated.
可选地,还包含进行执行如下操作的指令:从所述服务器获取针对备忘信息的预设提醒事件。Optionally, it further includes an instruction to perform the following operations: obtaining a preset reminder event for the memo information from the server.
可选地,还包含进行执行如下操作的指令:当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。Optionally, it also includes an instruction to perform the following operations: when the trigger condition of the preset reminder event is met, call the interactive assistant to obtain the memo information corresponding to the preset reminder event and play it.
Optionally, obtaining and playing the preset memo information according to the voice recognition result includes:
Searching the preset memo information for information that matches the voice recognition result;
调用所述交互助手播放所述与所述语音识别结果匹配的信息。Invoke the interactive assistant to play the information matching the voice recognition result.
可选地,还包含进行执行如下操作的指令:获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;Optionally, it also includes instructions for performing the following operations: obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
Optionally, obtaining and playing the preset memo information according to the voice recognition result includes:
When the voice recognition result indicates a need to find memo information with target tag information, calling the interactive assistant to find the preset memo information matching the target tag information and play it.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
调用所述交互助手生成与所述对话语句匹配的答复语句并播放。Invoke the interactive assistant to generate a reply sentence matching the dialogue sentence and play it.
可选地,所述调用所述交互助手,生成与所述对话语句匹配的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence includes:
获取用户方位信息;Obtain user location information;
调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
可选地,所述耳机具有方位传感器,所述获取用户方位信息,包括:Optionally, the headset has an orientation sensor, and the acquiring user orientation information includes:
获取所述方位传感器检测的用户方位信息。Obtain user position information detected by the position sensor.
可选地,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
获取用户地理位置信息;Obtain user geographic location information;
调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence and the user's geographic location information.
可选地,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;Sending navigation query information to the server; the navigation query information includes the user location information and the dialogue sentence;
接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。The reply sentence for voice navigation sent by the server is received and played. The reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
一种非临时性计算机可读存储介质,当所述存储介质中的指令由耳机的处理器执行时,使得耳机能够执行一种交互方法,所述方法包括:A non-transitory computer-readable storage medium, when instructions in the storage medium are executed by a processor of the headset, the headset can execute an interactive method, the method comprising:
向服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;Sending a user voice to a server, and obtaining a voice recognition result of the user voice from the server;
调用交互助手根据所述语音识别结果执行交互操作。The interactive assistant is invoked to perform interactive operations according to the voice recognition result.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
根据所述语音识别结果唤醒所述交互助手;Wake up the interactive assistant according to the voice recognition result;
获取用户状态;Get user status;
调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。Invoke the interactive assistant to recommend songs or play songs according to the user status.
可选地,所述耳机具有重力传感器,所述获取用户状态,包括:Optionally, the headset has a gravity sensor, and the acquiring the user status includes:
获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。Acquire the sensing data detected by the gravity sensor, and determine the user state according to the sensing data.
可选地,所述调用所述交互助手根据所述用户状态推荐歌曲,包括:Optionally, the invoking the interactive assistant to recommend songs according to the user status includes:
向所述服务器发送所述用户状态;Sending the user status to the server;
接收所述服务器发送的推荐歌曲并调用所述交互助手向用户推荐,所述推荐歌曲为由所述服务器查找的与所述用户状态匹配的歌曲。Receive the recommended song sent by the server and call the interactive assistant to recommend to the user, the recommended song is a song searched by the server that matches the user's status.
可选地,所述调用所述交互助手根据所述用户状态播放歌曲,包括:Optionally, the invoking the interactive assistant to play the song according to the user status includes:
向所述服务器发送所述用户状态;Sending the user status to the server;
Receive the preset song with the adjusted sound effect sent by the server and call the interactive assistant to play it; the preset song with the adjusted sound effect is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
可选地,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:Optionally, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result, includes:
Invoke the interactive assistant to recognize memo information from the voice recognition result according to the voice recognition result and record it, or obtain preset memo information according to the voice recognition result and play it.
可选地,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:Optionally, the invoking the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result, includes:
调用所述交互助手根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并播放。The interactive assistant is called to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
可选地,还包括:向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。Optionally, the method further includes: sending the recorded information to the server, and/or obtaining and recording the recorded information of the server.
可选地,还包括:在记录备忘信息之后,生成针对所述备忘信息的提醒事件。Optionally, the method further includes: after the memo information is recorded, generating a reminder event for the memo information.
可选地,还包括:从所述服务器获取针对备忘信息的预设提醒事件。Optionally, the method further includes: obtaining a preset reminder event for the memo information from the server.
可选地,还包括:当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。Optionally, the method further includes: when the trigger condition of the preset reminder event is met, invoking the interactive assistant to obtain and play the memo information corresponding to the preset reminder event.
Optionally, obtaining and playing the preset memo information according to the voice recognition result includes:
Searching the preset memo information for information that matches the voice recognition result;
调用所述交互助手播放所述与所述语音识别结果匹配的信息。Invoke the interactive assistant to play the information matching the voice recognition result.
可选地,还包括:获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;Optionally, the method further includes: obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
Optionally, obtaining and playing the preset memo information according to the voice recognition result includes:
When the voice recognition result indicates a need to find memo information with target tag information, calling the interactive assistant to find the preset memo information matching the target tag information and play it.
可选地,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:Optionally, the invoking the interactive assistant to perform an interactive operation according to the voice recognition result includes:
从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
调用所述交互助手生成与所述对话语句匹配的答复语句并播放。Invoke the interactive assistant to generate a reply sentence matching the dialogue sentence and play it.
可选地,所述调用所述交互助手,生成与所述对话语句匹配的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence includes:
获取用户方位信息;Obtain user location information;
调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
可选地,所述耳机具有方位传感器,所述获取用户方位信息,包括:Optionally, the headset has an orientation sensor, and the acquiring user orientation information includes:
获取所述方位传感器检测的用户方位信息。Obtain user position information detected by the position sensor.
可选地,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
获取用户地理位置信息;Obtain user geographic location information;
调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence and the user's geographic location information.
可选地,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:Optionally, the invoking the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence includes:
向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;Sending navigation query information to the server; the navigation query information includes the user location information and the dialogue sentence;
接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。The reply sentence for voice navigation sent by the server is received and played. The reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
Fig. 11 is a schematic structural diagram of a server 1100 for interaction according to another exemplary embodiment of the present invention. The server may vary greatly due to different configurations or performance, and may include one or more central processing units (CPU) 1122 (for example, one or more processors), a memory 1132, and one or more storage media 1130 (for example, one or more mass storage devices) that store application programs 1142 or data 1144. The memory 1132 and the storage medium 1130 may be short-term storage or persistent storage. The program stored in the storage medium 1130 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Furthermore, the central processing unit 1122 may be configured to communicate with the storage medium 1130 and execute on the server the series of instruction operations in the storage medium 1130.
服务器还可以包括一个或一个以上电源1126,一个或一个以上有线或无线网络接口1150,一个或一个以上输入输出接口1158,一个或一个以上键盘1156,和/或,一个或一个以上操作系统1141,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM等等。The server may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input and output interfaces 1158, one or more keyboards 1156, and/or one or more operating systems 1141, For example, Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM and so on.
A server includes a memory and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by one or more processors, and the one or more programs include instructions for performing the following operations:
接收耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;Receive the user's voice sent by the headset, and recognize the user's voice to obtain the voice recognition result;
向所述耳机发送所述语音识别结果,所述耳机用于调用交互助手根据所述语音识别结果执行交互操作。The voice recognition result is sent to the earphone, and the earphone is used for invoking an interactive assistant to perform an interactive operation according to the voice recognition result.
可选地,还包含进行执行如下操作的指令:获取所述耳机检测的用户状态;Optionally, it also includes instructions for performing the following operations: acquiring the user status detected by the headset;
查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。Find a recommended song matching the user's state and send it to the headset; the headset is used to call the interactive assistant to recommend the recommended song to the user.
可选地,还包含进行执行如下操作的指令:获取所述耳机检测的用户状态;Optionally, it also includes instructions for performing the following operations: acquiring the user status detected by the headset;
确定与所述用户状态匹配的音效;Determining a sound effect matching the user status;
将预设歌曲调整为所述音效,并向所述耳机发送调整音效后的所述预设歌曲,所述耳机用于调用所述交互助手播放调整音效后的所述预设歌曲。The preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
Optionally, it further includes instructions for performing the following operations: recognizing information from the user's voice according to the voice recognition result and recording it, or obtaining recorded information according to the voice recognition result and sending it to the headset, where the headset is used to play the recorded information.
可选地,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:Optionally, the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset, includes:
Recognize memo information from the voice recognition result according to the voice recognition result and record it, or obtain preset memo information according to the voice recognition result and send it to the headset.
可选地,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:Optionally, the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset, includes:
根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。According to the voice recognition result, a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
可选地,还包含进行执行如下操作的指令:向所述耳机发送从所述用户语音识别的信息,和/或,获取所述耳机已记录的信息并记录。Optionally, it further includes instructions for performing the following operations: sending information recognized from the user's voice to the headset, and/or acquiring and recording the information recorded by the headset.
可选地,还包含进行执行如下操作的指令:在记录备忘信息之后,生成针对所述备忘信息的提醒事件;Optionally, it also includes instructions for performing the following operations: after recording the memo information, generating a reminder event for the memo information;
Send the reminder event to the headset, where the headset is used to call the interactive assistant to obtain the memo information corresponding to the preset reminder event and play it when the trigger condition of the preset reminder event is met.
可选地,还包含进行执行如下操作的指令:对备忘信息进行语义分析,得到语义分析结果;Optionally, it also contains instructions for performing the following operations: performing semantic analysis on the memo information to obtain a semantic analysis result;
根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
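As a rough illustration of generating label information from a semantic analysis result, the sketch below uses a crude keyword lookup. The tag names and keyword lists are invented for the example; an actual system would rely on a proper semantic analysis model rather than this stand-in.

```python
# Crude keyword lookup standing in for "semantic analysis"; a real system would use an NLU model.
TAG_KEYWORDS = {
    "work": ["会议", "报告", "meeting", "report"],
    "shopping": ["超市", "购物", "buy", "groceries"],
    "health": ["吃药", "体检", "medicine", "checkup"],
}

def tag_memo(memo_text):
    """Generate label information for a memo from simple keyword matches."""
    lowered = memo_text.lower()
    tags = [tag for tag, words in TAG_KEYWORDS.items()
            if any(w in lowered or w in memo_text for w in words)]
    return tags or ["misc"]
```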
可选地，所述根据所述语音识别结果获取预设备忘信息并向所述耳机发送，包括：Optionally, the acquiring of preset memo information according to the voice recognition result and sending it to the headset includes:
当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时，查找与目标标签信息匹配的预设备忘信息并向所述耳机发送。When the voice recognition result indicates a need to find memo information with target tag information, preset memo information matching the target tag information is found and sent to the headset.
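The tag-matched lookup described above amounts to a simple filter over the recorded memos, as in the following sketch. The memo layout (a dict with a "tags" field) is an assumption for the example. If the recognition result is understood as, say, "find my work memos", the server would call find_memos_by_tag("work", memo_store) and send the hits to the earphone for playback.

```python
def find_memos_by_tag(target_tag, memo_store):
    """Return recorded memos whose label information matches the target tag."""
    # Each memo is assumed to be a dict such as {"text": "...", "tags": ["work"]}.
    return [m for m in memo_store if target_tag in m.get("tags", [])]
```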
可选地,还包含进行执行如下操作的指令:从所述语音识别结果中获取对话语句;Optionally, it also includes instructions for performing the following operations: obtaining a dialogue sentence from the voice recognition result;
生成与所述对话语句匹配的答复语句并向所述耳机发送,所述耳机用于播放所述与所述对话语句匹配的答复语句。A reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
可选地,所述生成与所述对话语句匹配的答复语句并向所述耳机发送,包括:Optionally, the generating and sending a reply sentence matching the conversation sentence to the headset includes:
获取所述耳机检测的用户方位信息;Acquiring user location information detected by the headset;
根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并向所述耳机发送。According to the user location information and the dialogue sentence, a reply sentence for voice navigation is generated and sent to the headset.
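A minimal sketch of turning the user orientation information and the dialogue sentence into a voice-navigation reply is given below. The heading convention, the poi_lookup helper, and the destination fields are all assumptions made for the example; the patent does not prescribe any particular navigation logic.

```python
# Hypothetical illustration: combine the headset's heading reading with the user's
# question to produce a spoken navigation reply; the POI lookup is an assumed helper.
def navigation_reply(heading_degrees, dialogue_text, poi_lookup):
    """Generate a reply sentence for voice navigation from user orientation and the dialogue."""
    destination = poi_lookup(dialogue_text)   # e.g. resolves "nearest subway station"
    if destination is None:
        return "Sorry, I could not find that place."
    # Relative direction = bearing to the destination minus the direction the user is facing.
    relative = (destination["bearing"] - heading_degrees) % 360
    if relative < 45 or relative > 315:
        direction = "straight ahead"
    elif relative < 180:
        direction = "to your right"
    else:
        direction = "to your left"
    return f"{destination['name']} is about {destination['distance_m']} meters {direction}."
```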
一种非临时性计算机可读存储介质,当所述存储介质中的指令由服务器的处理器执行时,使得服务器能够执行一种交互方法,所述方法包括:A non-transitory computer-readable storage medium. When instructions in the storage medium are executed by a processor of a server, the server can execute an interactive method, the method including:
接收耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;Receive the user's voice sent by the headset, and recognize the user's voice to obtain the voice recognition result;
向所述耳机发送所述语音识别结果,所述耳机用于调用交互助手根据所述语音识别结果执行交互操作。The voice recognition result is sent to the earphone, and the earphone is used for invoking an interactive assistant to perform an interactive operation according to the voice recognition result.
可选地,还包括:Optionally, it also includes:
获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。Find a recommended song matching the user's state and send it to the headset; the headset is used to call the interactive assistant to recommend the recommended song to the user.
可选地,还包括:Optionally, it also includes:
获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
确定与所述用户状态匹配的音效;Determining a sound effect matching the user status;
将预设歌曲调整为所述音效,并向所述耳机发送调整音效后的所述预设歌曲,所述耳机用于调用所述交互助手播放调整音效后的所述预设歌曲。The preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
可选地,还包括:Optionally, it also includes:
根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,所述耳机用于播放所述已记录的信息。Recognize and record information from the user's voice according to the voice recognition result, or obtain and send the recorded information to the earphone according to the voice recognition result, and the earphone is used to play the recorded information.
可选地,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:Optionally, the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset, includes:
根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录，或根据所述语音识别结果获取预设备忘信息并向所述耳机发送。Recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain preset memo information according to the voice recognition result and send it to the headset.
可选地,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:Optionally, the recognizing and recording information from the user's voice according to the voice recognition result, or obtaining the recorded information according to the voice recognition result and sending it to the headset, includes:
根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。According to the voice recognition result, a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
可选地,还包括:Optionally, it also includes:
向所述耳机发送从所述用户语音识别的信息,和/或,获取所述耳机已记录的信息并记录。Send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
可选地,还包括:Optionally, it also includes:
在记录备忘信息之后,生成针对所述备忘信息的提醒事件;After the memo information is recorded, a reminder event for the memo information is generated;
向所述耳机发送所述提醒事件，所述耳机用于当满足预设提醒事件的触发条件时，调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The reminder event is sent to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
可选地,还包括:Optionally, it also includes:
对备忘信息进行语义分析,得到语义分析结果;Perform semantic analysis on the memo information to obtain the semantic analysis result;
根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
可选地，所述根据所述语音识别结果获取预设备忘信息并向所述耳机发送，包括：Optionally, the acquiring of preset memo information according to the voice recognition result and sending it to the headset includes:
当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时，查找与目标标签信息匹配的预设备忘信息并向所述耳机发送。When the voice recognition result indicates a need to find memo information with target tag information, preset memo information matching the target tag information is found and sent to the headset.
可选地,还包括:Optionally, it also includes:
从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
生成与所述对话语句匹配的答复语句并向所述耳机发送,所述耳机用于播放所述与所述对话语句匹配的答复语句。A reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
可选地,所述生成与所述对话语句匹配的答复语句并向所述耳机发送,包括:Optionally, the generating and sending a reply sentence matching the conversation sentence to the headset includes:
获取所述耳机检测的用户方位信息;Acquiring user location information detected by the headset;
根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并向所述耳机发送。According to the user location information and the dialogue sentence, a reply sentence for voice navigation is generated and sent to the headset.
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。The various embodiments in this specification are described in a progressive manner, and each embodiment focuses on the differences from other embodiments, and the same or similar parts between the various embodiments can be referred to each other.
本发明实施例是参照根据本发明实施例的方法、终端设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理终端设备的处理器以产生一个机器，使得通过计算机或其他可编程数据处理终端设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The embodiments of the present invention are described with reference to the flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理终端设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device. The instruction device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
这些计算机程序指令也可装载到计算机或其他可编程数据处理终端设备上，使得在计算机或其他可编程终端设备上执行一系列操作步骤以产生计算机实现的处理，从而在计算机或其他可编程终端设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
尽管已描述了本发明实施例的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本发明实施例范围的所有变更和修改。Although the preferred embodiments of the embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic creative concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
最后，还需要说明的是，在本文中，诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者终端设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者终端设备所固有的要素。在没有更多限制的情况下，由语句“包括一个......”限定的要素，并不排除在包括所述要素的过程、方法、物品或者终端设备中还存在另外的相同要素。Finally, it should also be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the statement "including a..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes said element.
以上对本发明所提供的一种交互方法、一种交互装置、一种耳机和一种服务器，进行了详细介绍，本文中应用了具体个例对本发明的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本发明的方法及其核心思想；同时，对于本领域的一般技术人员，依据本发明的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本发明的限制。The above has described in detail an interaction method, an interaction device, an earphone, and a server provided by the present invention. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method and core idea of the present invention. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (100)

  1. 一种交互方法,其特征在于,应用于耳机,所述耳机与服务器通信连接,所述耳机具有交互助手,所述方法包括:An interaction method, characterized in that it is applied to a headset, the headset is communicatively connected with a server, the headset has an interactive assistant, and the method includes:
    所述耳机向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;Sending, by the headset, a user voice to the server, and obtaining a voice recognition result of the user voice from the server;
    调用所述交互助手根据所述语音识别结果执行交互操作。Invoke the interactive assistant to perform an interactive operation according to the voice recognition result.
  2. 根据权利要求1所述的方法,其特征在于,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:The method according to claim 1, wherein the invoking the interactive assistant to perform an interactive operation according to the voice recognition result comprises:
    根据所述语音识别结果唤醒所述交互助手;Wake up the interactive assistant according to the voice recognition result;
    获取用户状态;Get user status;
    调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。Invoke the interactive assistant to recommend songs or play songs according to the user status.
  3. 根据权利要求2所述的方法,其特征在于,所述耳机具有重力传感器,所述获取用户状态,包括:The method according to claim 2, wherein the headset has a gravity sensor, and the acquiring the user status includes:
    获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。Acquire the sensing data detected by the gravity sensor, and determine the user state according to the sensing data.
  4. 根据权利要求2所述的方法,其特征在于,所述调用所述交互助手根据所述用户状态推荐歌曲,包括:The method according to claim 2, wherein said invoking said interactive assistant to recommend songs according to said user status comprises:
    向所述服务器发送所述用户状态;Sending the user status to the server;
    接收所述服务器发送的推荐歌曲并调用所述交互助手向用户推荐,所述推荐歌曲为由所述服务器查找的与所述用户状态匹配的歌曲。Receive the recommended song sent by the server and call the interactive assistant to recommend to the user, the recommended song is a song searched by the server that matches the user's status.
  5. 根据权利要求2所述的方法,其特征在于,所述调用所述交互助手根据所述用户状态播放歌曲,包括:The method according to claim 2, wherein the invoking the interactive assistant to play a song according to the user status comprises:
    向所述服务器发送所述用户状态;Sending the user status to the server;
    接收所述服务器发送的音效调整后的预设歌曲并调用所述交互助手播放；音效调整后的所述预设歌曲为由所述服务器确定与所述用户状态匹配的音效，并将预设歌曲调整为所述音效得到。Receive the sound-effect-adjusted preset song sent by the server and call the interactive assistant to play it; the sound-effect-adjusted preset song is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect.
  6. 根据权利要求1所述的方法,其特征在于,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:The method according to claim 1, wherein the invoking the interactive assistant to perform an interactive operation according to the voice recognition result comprises:
    调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  7. 根据权利要求6所述的方法,其特征在于,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:The method according to claim 6, wherein said invoking said interactive assistant to recognize and record information from said user voice according to said voice recognition result, or obtain recorded information according to said voice recognition result And play, including:
    调用所述交互助手根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录，或根据所述语音识别结果获取预设备忘信息并播放。Invoke the interactive assistant to recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain and play preset memo information according to the voice recognition result.
  8. 根据权利要求6所述的方法,其特征在于,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:The method according to claim 6, wherein said invoking said interactive assistant to recognize and record information from said user voice according to said voice recognition result, or obtain recorded information according to said voice recognition result And play, including:
    调用所述交互助手根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并播放。The interactive assistant is called to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
  9. 根据权利要求6所述的方法,其特征在于,还包括:The method according to claim 6, further comprising:
    向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。Send the recorded information to the server, and/or obtain and record the recorded information of the server.
  10. 根据权利要求7所述的方法,其特征在于,还包括:The method according to claim 7, further comprising:
    在记录备忘信息之后,生成针对所述备忘信息的提醒事件。After the memo information is recorded, a reminder event for the memo information is generated.
  11. 根据权利要求8所述的方法,其特征在于,还包括:The method according to claim 8, further comprising:
    从所述服务器获取针对备忘信息的预设提醒事件。Obtain a preset reminder event for the memo information from the server.
  12. 根据权利要求10或11所述的方法,其特征在于,还包括:The method according to claim 10 or 11, further comprising:
    当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。When the trigger condition of the preset reminder event is met, the interactive assistant is called to obtain and play the memo information corresponding to the preset reminder event.
  13. 根据权利要求7所述的方法，其特征在于，所述根据所述语音识别结果获取预设备忘信息并播放，包括：The method according to claim 7, wherein the obtaining and playing of preset memo information according to the voice recognition result comprises:
    从预设备忘信息中查找与所述语音识别结果匹配的信息；Searching for information that matches the voice recognition result from the preset memo information;
    调用所述交互助手播放所述与所述语音识别结果匹配的信息。Invoke the interactive assistant to play the information matching the voice recognition result.
  14. 根据权利要求7所述的方法,其特征在于,还包括:The method according to claim 7, further comprising:
    获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;Obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
    根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
  15. 根据权利要求14所述的方法，其特征在于，所述根据所述语音识别结果获取预设备忘信息并播放，包括：The method according to claim 14, wherein the obtaining and playing of preset memo information according to the voice recognition result comprises:
    当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时，调用所述交互助手查找与目标标签信息匹配的预设备忘信息并播放。When the voice recognition result indicates a need to find memo information with target tag information, the interactive assistant is called to find and play preset memo information that matches the target tag information.
  16. 根据权利要求1所述的方法,其特征在于,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:The method according to claim 1, wherein the invoking the interactive assistant to perform an interactive operation according to the voice recognition result comprises:
    从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
    调用所述交互助手生成与所述对话语句匹配的答复语句并播放。The interactive assistant is called to generate and play a reply sentence matching the dialogue sentence.
  17. 根据权利要求16所述的方法,其特征在于,所述调用所述交互助手,生成与所述对话语句匹配的答复语句并播放,包括:The method according to claim 16, wherein the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence comprises:
    获取用户方位信息;Obtain user location information;
    调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
  18. 根据权利要求17所述的方法,其特征在于,所述耳机具有方位传感器,所述获取用户方位信息,包括:The method according to claim 17, wherein the headset has an orientation sensor, and the acquiring user orientation information comprises:
    获取所述方位传感器检测的用户方位信息。Obtain user position information detected by the position sensor.
  19. 根据权利要求17所述的方法,其特征在于,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:The method according to claim 17, wherein said invoking said interactive assistant to generate and play a reply sentence for voice navigation according to said user location information and said dialogue sentence, comprising:
    获取用户地理位置信息;Obtain user geographic location information;
    调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence and the user's geographic location information.
  20. 根据权利要求17所述的方法,其特征在于,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:The method according to claim 17, wherein said invoking said interactive assistant to generate and play a reply sentence for voice navigation according to said user location information and said dialogue sentence, comprising:
    向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;Sending navigation query information to the server; the navigation query information includes the user location information and the dialogue sentence;
    接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。The reply sentence for voice navigation sent by the server is received and played. The reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
  21. 一种交互方法,其特征在于,应用于服务器,所述服务器与耳机通信连接,所述耳机具有交互助手,所述方法包括:An interaction method, characterized in that it is applied to a server, the server is communicatively connected with a headset, the headset has an interactive assistant, and the method includes:
    所述服务器接收所述耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;The server receives the user voice sent by the headset, and recognizes the user voice to obtain a voice recognition result;
    向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。The voice recognition result is sent to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
  22. 根据权利要求21所述的方法,其特征在于,还包括:The method according to claim 21, further comprising:
    获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
    查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。Find a recommended song matching the user's state and send it to the headset; the headset is used to call the interactive assistant to recommend the recommended song to the user.
  23. 根据权利要求21所述的方法,其特征在于,还包括:The method according to claim 21, further comprising:
    获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
    确定与所述用户状态匹配的音效;Determining a sound effect matching the user status;
    将预设歌曲调整为所述音效,并向所述耳机发送调整音效后的所述预设歌曲,所述耳机用于调用所述交互助手播放调整音效后的所述预设歌曲。The preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
  24. 根据权利要求21所述的方法,其特征在于,还包括:The method according to claim 21, further comprising:
    根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,所述耳机用于播放所述已记录的信息。Recognize and record information from the user's voice according to the voice recognition result, or obtain and send the recorded information to the earphone according to the voice recognition result, and the earphone is used to play the recorded information.
  25. 根据权利要求24所述的方法,其特征在于,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:The method according to claim 24, wherein the information is recognized and recorded from the user voice according to the voice recognition result, or the recorded information is obtained according to the voice recognition result and sent to the headset Send, including:
    根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录，或根据所述语音识别结果获取预设备忘信息并向所述耳机发送。Recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain preset memo information according to the voice recognition result and send it to the headset.
  26. 根据权利要求24所述的方法,其特征在于,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:The method according to claim 24, wherein the information is recognized and recorded from the user voice according to the voice recognition result, or the recorded information is obtained according to the voice recognition result and sent to the headset Send, including:
    根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。According to the voice recognition result, a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
  27. 根据权利要求24所述的方法,其特征在于,还包括:The method according to claim 24, further comprising:
    向所述耳机发送从所述用户语音识别的信息,和/或,获取所述耳机已记录的信息并记录。Send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
  28. 根据权利要求25所述的方法,其特征在于,还包括:The method according to claim 25, further comprising:
    在记录备忘信息之后,生成针对所述备忘信息的提醒事件;After the memo information is recorded, a reminder event for the memo information is generated;
    向所述耳机发送所述提醒事件，所述耳机用于当满足预设提醒事件的触发条件时，调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The reminder event is sent to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  29. 根据权利要求25所述的方法,其特征在于,还包括:The method according to claim 25, further comprising:
    对备忘信息进行语义分析,得到语义分析结果;Perform semantic analysis on the memo information to obtain the semantic analysis result;
    根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
  30. 根据权利要求29所述的方法，其特征在于，所述根据所述语音识别结果获取预设备忘信息并向所述耳机发送，包括：The method according to claim 29, wherein the acquiring of preset memo information according to the voice recognition result and sending it to the headset comprises:
    当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时，查找与目标标签信息匹配的预设备忘信息并向所述耳机发送。When the voice recognition result indicates a need to find memo information with target tag information, preset memo information matching the target tag information is found and sent to the headset.
  31. 根据权利要求21所述的方法,其特征在于,还包括:The method according to claim 21, further comprising:
    从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
    生成与所述对话语句匹配的答复语句并向所述耳机发送,所述耳机用于播放所述与所述对话语句匹配的答复语句。A reply sentence matching the dialogue sentence is generated and sent to the earphone, and the earphone is used to play the reply sentence matching the dialogue sentence.
  32. 根据权利要求31所述的方法,其特征在于,所述生成与所述对话语句匹配的答复语句并向所述耳机发送,包括:The method according to claim 31, wherein the generating and sending a reply sentence matching the conversation sentence to the headset comprises:
    获取所述耳机检测的用户方位信息;Acquiring user location information detected by the headset;
    根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并向所述耳机发送。According to the user location information and the dialogue sentence, a reply sentence for voice navigation is generated and sent to the headset.
  33. 一种交互装置,其特征在于,应用于耳机,所述耳机与服务器通信连接,所述耳机具有交互助手,所述装置包括:An interactive device, characterized in that it is applied to a headset, the headset is in communication connection with a server, the headset has an interactive assistant, and the device includes:
    语音识别结果获取模块,用于向所述服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;A voice recognition result obtaining module, configured to send a user voice to the server, and obtain a voice recognition result of the user voice from the server;
    第一交互模块,用于调用所述交互助手根据所述语音识别结果执行交互操作。The first interaction module is used to invoke the interaction assistant to perform an interaction operation according to the voice recognition result.
  34. 根据权利要求33所述的交互装置,其特征在于,所述第一交互模块包括:The interaction device according to claim 33, wherein the first interaction module comprises:
    唤醒模块,用于根据所述语音识别结果唤醒所述交互助手;The wake-up module is used to wake up the interactive assistant according to the voice recognition result;
    用户状态获取模块,用于获取用户状态;User status acquisition module, used to acquire user status;
    歌曲交互模块,用于调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。The song interaction module is used to call the interactive assistant to recommend songs or play songs according to the user status.
  35. 根据权利要求34所述的交互装置,其特征在于,所述耳机具有重力传感器,所述用户状态获取模块,用于获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。The interaction device according to claim 34, wherein the headset has a gravity sensor, and the user status acquisition module is configured to acquire sensor data detected by the gravity sensor, and determine the user status according to the sensor data .
  36. 根据权利要求34所述的交互装置,其特征在于,所述歌曲交互模块,用于向所述服务器发送所述用户状态;接收所述服务器发送的推荐歌曲并调用所述交互助手向用户推荐,所述推荐歌曲为由所述服务器查找的与所述用户状态匹配的歌曲。The interaction device according to claim 34, wherein the song interaction module is configured to send the user status to the server; receive recommended songs sent by the server and call the interactive assistant to recommend to the user, The recommended song is a song found by the server that matches the status of the user.
  37. 根据权利要求34所述的交互装置，其特征在于，所述歌曲交互模块，用于向所述服务器发送所述用户状态；接收所述服务器发送的音效调整后的预设歌曲并调用所述交互助手播放；音效调整后的所述预设歌曲为由所述服务器确定与所述用户状态匹配的音效，并将预设歌曲调整为所述音效得到。The interaction device according to claim 34, wherein the song interaction module is configured to send the user status to the server, receive the sound-effect-adjusted preset song sent by the server, and call the interactive assistant to play it; the sound-effect-adjusted preset song is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect.
  38. 根据权利要求33所述的交互装置,其特征在于,所述第一交互模块包括:The interaction device according to claim 33, wherein the first interaction module comprises:
    第一记录交互模块,用于调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。The first recording interaction module is configured to call the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  39. 根据权利要求38所述的交互装置，其特征在于，所述第一记录交互模块，用于调用所述交互助手根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录，或根据所述语音识别结果获取预设备忘信息并播放。The interaction device according to claim 38, wherein the first recording interaction module is configured to call the interactive assistant to recognize and record memo information from the voice recognition result according to the voice recognition result, or to obtain and play preset memo information according to the voice recognition result.
  40. 根据权利要求38所述的交互装置,其特征在于,所述第一记录交互模块,用于调用所述交互助手根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并播放。The interaction device of claim 38, wherein the first recording interaction module is configured to call the interaction assistant to recognize and record the target voice from the user voice according to the voice recognition result, or according to the voice recognition result. According to the voice recognition result, the recorded target voice is obtained and played.
  41. 根据权利要求38所述的交互装置,其特征在于,还包括:The interaction device according to claim 38, further comprising:
    第一记录信息传输模块,用于向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。The first recorded information transmission module is configured to send recorded information to the server, and/or obtain and record the recorded information of the server.
  42. 根据权利要求39所述的交互装置,其特征在于,还包括:The interaction device of claim 39, further comprising:
    第一提醒事件生成模块,用于在记录备忘信息之后,生成针对所述备忘信息的提醒事件。The first reminder event generating module is configured to generate a reminder event for the memo information after the memo information is recorded.
  43. 根据权利要求40所述的交互装置,其特征在于,还包括:The interaction device of claim 40, further comprising:
    第一提醒事件获取模块,用于从所述服务器获取针对备忘信息的预设提醒事件。The first reminder event obtaining module is configured to obtain preset reminder events for the memo information from the server.
  44. 根据权利要求42或43所述的交互装置,其特征在于,还包括:The interaction device according to claim 42 or 43, further comprising:
    第一提醒事件触发模块,用于当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The first reminder event triggering module is configured to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  45. 根据权利要求39所述的交互装置，其特征在于，所述第一记录交互模块，用于从预设备忘信息中查找与所述语音识别结果匹配的信息；调用所述交互助手播放所述与所述语音识别结果匹配的信息。The interaction device according to claim 39, wherein the first recording interaction module is configured to search the preset memo information for information matching the voice recognition result, and to call the interactive assistant to play the information matching the voice recognition result.
  46. 根据权利要求39所述的交互装置,其特征在于,还包括:The interaction device of claim 39, further comprising:
    第一语义分析模块,用于获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;The first semantic analysis module is configured to obtain a semantic analysis result obtained by the server performing semantic analysis on the memo information;
    第一标签生成模块,用于根据语义分析结果,对所述备忘信息生成标签信息。The first label generating module is configured to generate label information for the memo information according to the semantic analysis result.
  47. 根据权利要求46所述的交互装置，其特征在于，所述第一记录交互模块，用于当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时，调用所述交互助手查找与目标标签信息匹配的预设备忘信息并播放。The interaction device according to claim 46, wherein the first recording interaction module is configured to, when the voice recognition result indicates a need to find memo information with target tag information, call the interactive assistant to find and play preset memo information matching the target tag information.
  48. 根据权利要求33所述的交互装置,其特征在于,所述第一交互模块包括:The interaction device according to claim 33, wherein the first interaction module comprises:
    第一对话语句获取模块,用于从所述语音识别结果中获取对话语句;The first dialogue sentence obtaining module is used to obtain the dialogue sentence from the speech recognition result;
    第一对话交互模块,用于调用所述交互助手生成与所述对话语句匹配的答复语句并播放。The first dialogue interaction module is used to call the interactive assistant to generate and play a reply sentence matching the dialogue sentence.
  49. 根据权利要求48所述的交互装置，其特征在于，所述第一对话交互模块，用于获取用户方位信息；调用所述交互助手根据所述用户方位信息和所述对话语句，生成用于语音导航的答复语句并播放。The interaction device according to claim 48, wherein the first dialogue interaction module is configured to obtain user location information, and to call the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
  50. 根据权利要求49所述的交互装置,其特征在于,所述耳机具有方位传感器,所述第一对话交互模块,用于获取所述方位传感器检测的用户方位信息。The interaction device according to claim 49, wherein the headset has an orientation sensor, and the first dialogue interaction module is configured to obtain user orientation information detected by the orientation sensor.
  51. 根据权利要求49所述的交互装置，其特征在于，所述第一对话交互模块，用于获取用户地理位置信息；调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息，生成用于语音导航的答复语句并播放。The interaction device according to claim 49, wherein the first dialogue interaction module is configured to obtain user geographic location information, and to call the interactive assistant to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence, and the user geographic location information.
  52. 根据权利要求49所述的交互装置,其特征在于,所述第一对话交互模块,用于向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。The interaction device of claim 49, wherein the first dialog interaction module is configured to send navigation query information to the server; the navigation query information includes the user location information and the dialog sentence; The reply sentence for voice navigation sent by the server is received and played. The reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
  53. 一种交互装置,其特征在于,应用于服务器,所述服务器与耳机通信连接,所述耳机具有交互助手,所述装置包括:An interactive device, characterized in that it is applied to a server, the server is in communication connection with a headset, the headset has an interactive assistant, and the device includes:
    语音识别模块,用于接收所述耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;The voice recognition module is used to receive the user voice sent by the headset and recognize the user voice to obtain a voice recognition result;
    语音识别结果发送模块,用于向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。The voice recognition result sending module is configured to send the voice recognition result to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
  54. 根据权利要求53所述的交互装置,其特征在于,还包括:The interactive device of claim 53, further comprising:
    第一用户状态获取模块,用于获取所述耳机检测的用户状态;The first user status acquiring module is configured to acquire the user status detected by the headset;
    第一歌曲发送模块,用于查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。The first song sending module is used to find a recommended song matching the user's status and send it to the earphone; the earphone is used to call the interactive assistant to recommend the recommended song to the user.
  55. 根据权利要求53所述的交互装置,其特征在于,还包括:The interactive device of claim 53, further comprising:
    第二用户状态获取模块,用于获取所述耳机检测的用户状态;The second user status acquiring module is configured to acquire the user status detected by the headset;
    音效确定模块,用于确定与所述用户状态匹配的音效;A sound effect determining module, configured to determine a sound effect matching the user status;
    第二歌曲发送模块,用于将预设歌曲调整为所述音效,并向所述耳机发送调整音效后的所述预设歌曲,所述耳机用于调用所述交互助手播放调整音效后的所述预设歌曲。The second song sending module is configured to adjust a preset song to the sound effect and send the preset song with the adjusted sound effect to the earphone. The earphone is used to call the interactive assistant to play all the adjusted sound effects. Describe the preset songs.
  56. 根据权利要求53所述的交互装置,其特征在于,还包括:The interactive device of claim 53, further comprising:
    记录信息处理模块,用于根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,所述耳机用于播放所述已记录的信息。The recorded information processing module is used for recognizing and recording information from the user’s voice according to the voice recognition result, or acquiring the recorded information according to the voice recognition result and sending it to the earphone, and the earphone is used for playing The recorded information.
  57. 根据权利要求56所述的交互装置,其特征在于,所述记录信息处理模块包括:The interactive device according to claim 56, wherein the record information processing module comprises:
    备忘信息处理模块，用于根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录，或根据所述语音识别结果获取预设备忘信息并向所述耳机发送。The memo information processing module is configured to recognize and record memo information from the voice recognition result according to the voice recognition result, or to obtain preset memo information according to the voice recognition result and send it to the headset.
  58. 根据权利要求56所述的交互装置,其特征在于,所述记录信息处理模块包括:The interactive device according to claim 56, wherein the record information processing module comprises:
    语音处理模块,用于根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。The voice processing module is configured to recognize and record a target voice from the user voice according to the voice recognition result, or obtain a recorded target voice according to the voice recognition result and send it to the headset.
  59. 根据权利要求56所述的交互装置,其特征在于,还包括:The interactive device of claim 56, further comprising:
    第二记录信息传输模块,用于向所述耳机发送从所述用户语音识别的信息,和/或, 获取所述耳机已记录的信息并记录。The second record information transmission module is configured to send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
  60. 根据权利要求57所述的交互装置,其特征在于,还包括:The interaction device of claim 57, further comprising:
    第二提醒事件生成模块,用于在记录备忘信息之后,生成针对所述备忘信息的提醒事件;The second reminder event generating module is used to generate a reminder event for the memo information after the memo information is recorded;
    提醒事件发送模块，用于向所述耳机发送所述提醒事件，所述耳机用于当满足预设提醒事件的触发条件时，调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The reminder event sending module is configured to send the reminder event to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  61. 根据权利要求57所述的交互装置,其特征在于,还包括:The interaction device of claim 57, further comprising:
    第二语义分析模块,用于对备忘信息进行语义分析,得到语义分析结果;The second semantic analysis module is used to perform semantic analysis on the memo information to obtain the semantic analysis result;
    第二标签生成模块,用于根据语义分析结果,对所述备忘信息生成标签信息。The second label generating module is used to generate label information for the memo information according to the semantic analysis result.
  62. 根据权利要求61所述的交互装置，其特征在于，所述备忘信息处理模块，用于当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时，查找与目标标签信息匹配的预设备忘信息并向所述耳机发送。The interaction device according to claim 61, wherein the memo information processing module is configured to, when the voice recognition result indicates a need to find memo information with target tag information, search for preset memo information matching the target tag information and send it to the headset.
  63. 根据权利要求53所述的交互装置,其特征在于,还包括:The interactive device of claim 53, further comprising:
    对话语句获取模块,用于从所述语音识别结果中获取对话语句;A dialogue sentence obtaining module, which is used to obtain a dialogue sentence from the speech recognition result;
    答复语句发送模块,用于生成与所述对话语句匹配的答复语句并向所述耳机发送,所述耳机用于播放所述与所述对话语句匹配的答复语句。The reply sentence sending module is used for generating a reply sentence matching the dialogue sentence and sending it to the earphone, and the earphone is used for playing the reply sentence matching the dialogue sentence.
  64. 根据权利要求63所述的交互装置，其特征在于，所述答复语句发送模块，用于获取所述耳机检测的用户方位信息；根据所述用户方位信息和所述对话语句，生成用于语音导航的答复语句并向所述耳机发送。The interaction device according to claim 63, wherein the reply sentence sending module is configured to obtain the user location information detected by the headset, and to generate a reply sentence for voice navigation according to the user location information and the dialogue sentence and send it to the headset.
  65. 一种耳机，其特征在于，包括有存储器，以及一个或者一个以上的程序，其中一个或者一个以上程序存储于存储器中，且经配置以由一个或者一个以上处理器执行所述一个或者一个以上程序包含用于进行以下操作的指令：A headset, characterized in that it comprises a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs contain instructions for performing the following operations:
    向服务器发送用户语音,并从所述服务器获取所述用户语音的语音识别结果;Sending a user voice to a server, and obtaining a voice recognition result of the user voice from the server;
    调用交互助手根据所述语音识别结果执行交互操作。The interactive assistant is invoked to perform an interactive operation according to the voice recognition result.
  66. 根据权利要求65所述的耳机,其特征在于,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:The headset according to claim 65, wherein said invoking said interactive assistant to perform an interactive operation according to said voice recognition result comprises:
    根据所述语音识别结果唤醒所述交互助手;Wake up the interactive assistant according to the voice recognition result;
    获取用户状态;Get user status;
    调用所述交互助手根据所述用户状态推荐歌曲或播放歌曲。Invoke the interactive assistant to recommend songs or play songs according to the user status.
  67. 根据权利要求66所述的耳机,其特征在于,所述耳机具有重力传感器,所述获取用户状态,包括:The earphone according to claim 66, wherein the earphone has a gravity sensor, and said acquiring the user status comprises:
    获取所述重力传感器检测的传感数据,根据所述传感数据确定用户状态。Acquire the sensing data detected by the gravity sensor, and determine the user state according to the sensing data.
  68. 根据权利要求66所述的耳机,其特征在于,所述调用所述交互助手根据所述用户状态推荐歌曲,包括:The headset according to claim 66, wherein said invoking said interactive assistant to recommend songs according to said user status comprises:
    向所述服务器发送所述用户状态;Sending the user status to the server;
    接收所述服务器发送的推荐歌曲并调用所述交互助手向用户推荐,所述推荐歌曲为由所述服务器查找的与所述用户状态匹配的歌曲。Receive the recommended song sent by the server and call the interactive assistant to recommend to the user, the recommended song is a song searched by the server that matches the user's status.
  69. 根据权利要求66所述的耳机,其特征在于,所述调用所述交互助手根据所述用户状态播放歌曲,包括:The headset according to claim 66, wherein said invoking said interactive assistant to play a song according to said user status comprises:
    向所述服务器发送所述用户状态;Sending the user status to the server;
    接收所述服务器发送的音效调整后的预设歌曲并调用所述交互助手播放；音效调整后的所述预设歌曲为由所述服务器确定与所述用户状态匹配的音效，并将预设歌曲调整为所述音效得到。Receive the sound-effect-adjusted preset song sent by the server and call the interactive assistant to play it; the sound-effect-adjusted preset song is obtained by the server determining a sound effect matching the user status and adjusting the preset song to that sound effect.
  70. 根据权利要求65所述的耳机,其特征在于,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:The headset according to claim 65, wherein said invoking said interactive assistant to perform an interactive operation according to said voice recognition result comprises:
    调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放。Invoke the interactive assistant to recognize and record information from the user's voice according to the voice recognition result, or obtain and play the recorded information according to the voice recognition result.
  71. 根据权利要求70所述的耳机,其特征在于,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:The headset according to claim 70, wherein the invoking the interactive assistant recognizes and records information from the user voice according to the voice recognition result, or obtains recorded information according to the voice recognition result And play, including:
    调用所述交互助手根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录，或根据所述语音识别结果获取预设备忘信息并播放。Invoke the interactive assistant to recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain and play preset memo information according to the voice recognition result.
  72. 根据权利要求70所述的耳机,其特征在于,所述调用所述交互助手根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并播放,包括:The headset according to claim 70, wherein the invoking the interactive assistant recognizes and records information from the user voice according to the voice recognition result, or obtains recorded information according to the voice recognition result And play, including:
    调用所述交互助手根据所述语音识别结果从所述用户语音,识别目标语音并记录, 或根据所述语音识别结果获取已记录的目标语音并播放。Invoke the interactive assistant to recognize and record a target voice from the user's voice according to the voice recognition result, or obtain and play the recorded target voice according to the voice recognition result.
  73. 根据权利要求70所述的耳机,其特征在于,还包含用于进行以下操作的指令:The headset according to claim 70, further comprising instructions for performing the following operations:
    向所述服务器发送已记录的信息,和/或,获取所述服务器已记录的信息并记录。Send the recorded information to the server, and/or obtain and record the recorded information of the server.
  74. 根据权利要求71所述的耳机,其特征在于,还包含用于进行以下操作的指令:The headset according to claim 71, further comprising instructions for performing the following operations:
    在记录备忘信息之后,生成针对所述备忘信息的提醒事件。After the memo information is recorded, a reminder event for the memo information is generated.
  75. 根据权利要求72所述的耳机,其特征在于,还包含用于进行以下操作的指令:The headset according to claim 72, further comprising instructions for performing the following operations:
    从所述服务器获取针对备忘信息的预设提醒事件。Obtain a preset reminder event for the memo information from the server.
  76. 根据权利要求74或75所述的耳机,其特征在于,还包含用于进行以下操作的指令:The headset according to claim 74 or 75, further comprising instructions for performing the following operations:
    当满足预设提醒事件的触发条件时,调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。When the trigger condition of the preset reminder event is met, the interactive assistant is called to obtain and play the memo information corresponding to the preset reminder event.
  77. 根据权利要求71所述的耳机，其特征在于，所述根据所述语音识别结果获取预设备忘信息并播放，包括：The headset according to claim 71, wherein the obtaining and playing of preset memo information according to the voice recognition result comprises:
    从预设备忘信息中查找与所述语音识别结果匹配的信息；Searching for information that matches the voice recognition result from the preset memo information;
    调用所述交互助手播放所述与所述语音识别结果匹配的信息。Invoke the interactive assistant to play the information matching the voice recognition result.
  78. 根据权利要求71所述的耳机,其特征在于,还包含用于进行以下操作的指令:The headset according to claim 71, further comprising instructions for performing the following operations:
    获取由所述服务器对备忘信息进行语义分析得到的语义分析结果;Obtaining a semantic analysis result obtained by the server performing semantic analysis on the memo information;
    根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
  79. 根据权利要求78所述的耳机，其特征在于，所述根据所述语音识别结果获取预设备忘信息并播放，包括：The headset according to claim 78, wherein the obtaining and playing of preset memo information according to the voice recognition result comprises:
    当所述语音识别结果包括表征需求查找具有目标标签信息的备忘信息时，调用所述交互助手查找与目标标签信息匹配的预设备忘信息并播放。When the voice recognition result indicates a need to find memo information with target tag information, the interactive assistant is called to find and play preset memo information that matches the target tag information.
  80. 根据权利要求65所述的耳机,其特征在于,所述调用所述交互助手根据所述语音识别结果执行交互操作,包括:The headset according to claim 65, wherein said invoking said interactive assistant to perform an interactive operation according to said voice recognition result comprises:
    从所述语音识别结果中获取对话语句;Obtaining a dialogue sentence from the speech recognition result;
    调用所述交互助手生成与所述对话语句匹配的答复语句并播放。Invoke the interactive assistant to generate a reply sentence matching the dialogue sentence and play it.
  81. 根据权利要求80所述的耳机,其特征在于,所述调用所述交互助手,生成与所述对话语句匹配的答复语句并播放,包括:The headset according to claim 80, wherein the invoking the interactive assistant to generate and play a reply sentence matching the dialogue sentence comprises:
    获取用户方位信息;Obtain user location information;
    调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information and the dialogue sentence.
  82. 根据权利要求81所述的耳机,其特征在于,所述耳机具有方位传感器,所述获取用户方位信息,包括:The headset according to claim 81, wherein the headset has an orientation sensor, and said acquiring user orientation information comprises:
    获取所述方位传感器检测的用户方位信息。Obtain user position information detected by the position sensor.
  83. 根据权利要求81所述的耳机,其特征在于,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:The headset according to claim 81, wherein said invoking said interactive assistant to generate and play a reply sentence for voice navigation according to said user location information and said dialogue sentence, comprising:
    获取用户地理位置信息;Obtain user geographic location information;
    调用所述交互助手根据所述用户方位信息、所述对话语句和用户地理位置信息,生成用于语音导航的答复语句并播放。The interactive assistant is called to generate and play a reply sentence for voice navigation according to the user location information, the dialogue sentence and the user's geographic location information.
  84. 根据权利要求81所述的耳机,其特征在于,所述调用所述交互助手根据所述用户方位信息和所述对话语句,生成用于语音导航的答复语句并播放,包括:The headset according to claim 81, wherein said invoking said interactive assistant to generate and play a reply sentence for voice navigation according to said user location information and said dialogue sentence, comprising:
    向所述服务器发送导航查询信息;所述导航查询信息包括所述用户方位信息和所述对话语句;Sending navigation query information to the server; the navigation query information includes the user location information and the dialogue sentence;
    接收所述服务器发送的用于语音导航的答复语句并播放,所述用于语音导航的答复语句由所述服务器根据所述用户方位信息和所述对话语句查询生成。The reply sentence for voice navigation sent by the server is received and played. The reply sentence for voice navigation is generated by the server according to the user location information and the dialog sentence query.
  85. 一种服务器，其特征在于，包括有存储器，以及一个或者一个以上的程序，其中一个或者一个以上程序存储于存储器中，且经配置以由一个或者一个以上处理器执行所述一个或者一个以上程序包含用于进行以下操作的指令：A server, characterized in that it comprises a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs contain instructions for performing the following operations:
    接收耳机发送的用户语音,并对用户语音进行识别,得到语音识别结果;Receive the user's voice sent by the headset, and recognize the user's voice to obtain the voice recognition result;
    向所述耳机发送所述语音识别结果,所述耳机用于调用所述交互助手根据所述语音识别结果执行交互操作。The voice recognition result is sent to the earphone, and the earphone is used to call the interactive assistant to perform an interactive operation according to the voice recognition result.
  86. 根据权利要求85所述的服务器,其特征在于,还包含用于进行以下操作的指令:The server of claim 85, further comprising instructions for performing the following operations:
    获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
    查找与所述用户状态匹配的推荐歌曲,并向所述耳机发送;所述耳机用于调用所述交互助手向用户推荐所述推荐歌曲。Find a recommended song matching the user's state and send it to the headset; the headset is used to call the interactive assistant to recommend the recommended song to the user.
  87. 根据权利要求85所述的服务器,其特征在于,还包含用于进行以下操作的指令:The server of claim 85, further comprising instructions for performing the following operations:
    获取所述耳机检测的用户状态;Acquiring the user status detected by the headset;
    确定与所述用户状态匹配的音效;Determine a sound effect matching the user status;
    将预设歌曲调整为所述音效,并向所述耳机发送调整音效后的所述预设歌曲,所述耳机用于调用所述交互助手播放调整音效后的所述预设歌曲。The preset song is adjusted to the sound effect, and the preset song with the adjusted sound effect is sent to the earphone, and the earphone is used for invoking the interactive assistant to play the preset song with the adjusted sound effect.
  88. 根据权利要求85所述的服务器,其特征在于,还包含用于进行以下操作的指令:The server of claim 85, further comprising instructions for performing the following operations:
    根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,所述耳机用于播放所述已记录的信息。Recognize and record information from the user's voice according to the voice recognition result, or obtain and send the recorded information to the earphone according to the voice recognition result, and the earphone is used to play the recorded information.
  89. 根据权利要求88所述的服务器,其特征在于,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:The server according to claim 88, wherein the information is recognized and recorded from the user's voice according to the voice recognition result, or the recorded information is obtained according to the voice recognition result and sent to the headset Send, including:
    根据所述语音识别结果从所述语音识别结果中识别备忘信息并记录，或根据所述语音识别结果获取预设备忘信息并向所述耳机发送。Recognize and record memo information from the voice recognition result according to the voice recognition result, or obtain preset memo information according to the voice recognition result and send it to the headset.
  90. 根据权利要求88所述的服务器,其特征在于,所述根据所述语音识别结果从所述用户语音中识别信息并记录,或,根据所述语音识别结果获取已记录的信息并向所述耳机发送,包括:The server according to claim 88, wherein the information is recognized and recorded from the user's voice according to the voice recognition result, or the recorded information is obtained according to the voice recognition result and sent to the headset Send, including:
    根据所述语音识别结果从所述用户语音,识别目标语音并记录,或根据所述语音识别结果获取已记录的目标语音并向所述耳机发送。According to the voice recognition result, a target voice is recognized and recorded from the user voice, or a recorded target voice is acquired and sent to the headset according to the voice recognition result.
  91. 根据权利要求88所述的服务器,其特征在于,还包含用于进行以下操作的指令:The server according to claim 88, further comprising instructions for performing the following operations:
    向所述耳机发送从所述用户语音识别的信息,和/或,获取所述耳机已记录的信息并记录。Send the information recognized from the user's voice to the earphone, and/or obtain and record the information recorded by the earphone.
  92. 根据权利要求89所述的服务器,其特征在于,还包含用于进行以下操作的指令:The server according to claim 89, further comprising instructions for performing the following operations:
    在记录备忘信息之后,生成针对所述备忘信息的提醒事件;After the memo information is recorded, a reminder event for the memo information is generated;
    向所述耳机发送所述提醒事件，所述耳机用于当满足预设提醒事件的触发条件时，调用所述交互助手获取所述预设提醒事件相应的备忘信息并播放。The reminder event is sent to the earphone, and the earphone is used to call the interactive assistant to obtain and play the memo information corresponding to the preset reminder event when the trigger condition of the preset reminder event is met.
  93. 根据权利要求89所述的服务器,其特征在于,还包含用于进行以下操作的指令:The server according to claim 89, further comprising instructions for performing the following operations:
    对备忘信息进行语义分析,得到语义分析结果;Perform semantic analysis on the memo information to obtain the semantic analysis result;
    根据语义分析结果,对所述备忘信息生成标签信息。According to the semantic analysis result, label information is generated for the memo information.
  94. The server according to claim 93, wherein the obtaining preset memo information according to the voice recognition result and sending it to the earphone comprises:
    when the voice recognition result indicates a need to find memo information with target tag information, finding preset memo information matching the target tag information and sending it to the earphone.
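Claims 93 and 94 together describe tagging memos via semantic analysis and later retrieving memos by a target tag. The sketch below is only illustrative: it replaces the unspecified semantic analysis with a naive keyword lookup, and the tag vocabulary and function names are assumptions.

    # Illustrative sketch only: keyword-based "semantic analysis" that tags a memo,
    # plus a lookup of memos by target tag (a real system would use an NLU model).
    TAG_KEYWORDS = {
        "shopping": ["buy", "order", "purchase"],
        "meeting": ["meet", "meeting", "call"],
    }

    def tag_memo(memo: str) -> list[str]:
        words = memo.lower().split()
        return [tag for tag, kws in TAG_KEYWORDS.items() if any(k in words for k in kws)]

    def find_memos_by_tag(memos: dict[str, list[str]], target_tag: str) -> list[str]:
        return [m for m, tags in memos.items() if target_tag in tags]

    memos = {
        "buy milk tomorrow": tag_memo("buy milk tomorrow"),
        "meeting with Li at 3pm": tag_memo("meeting with Li at 3pm"),
    }
    print(find_memos_by_tag(memos, "shopping"))  # ['buy milk tomorrow']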
  95. The server according to claim 85, further comprising instructions for performing the following operations:
    obtaining a dialogue sentence from the voice recognition result;
    generating a reply sentence matching the dialogue sentence and sending it to the earphone, the earphone being configured to play the reply sentence matching the dialogue sentence.
  96. The server according to claim 95, wherein the generating a reply sentence matching the dialogue sentence and sending it to the earphone comprises:
    acquiring user location information detected by the earphone;
    generating a reply sentence for voice navigation according to the user location information and the dialogue sentence, and sending it to the earphone.
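For claim 96, one minimal illustrative sketch (not the claimed implementation) is to combine a heading reported by the earphone with a destination bearing to phrase a spoken navigation reply; the bearing arithmetic, thresholds, and wording below are all assumptions.

    # Illustrative sketch only: turn the user's reported heading and a destination
    # bearing into a spoken-style navigation instruction.
    def navigation_reply(user_heading_deg: float, destination_bearing_deg: float) -> str:
        """Return an instruction turning the user toward the destination."""
        diff = (destination_bearing_deg - user_heading_deg) % 360
        if diff < 20 or diff > 340:
            return "Keep going straight ahead."
        if diff < 180:
            return f"Turn right about {round(diff)} degrees, then continue."
        return f"Turn left about {round(360 - diff)} degrees, then continue."

    print(navigation_reply(user_heading_deg=90, destination_bearing_deg=180))
    # Turn right about 90 degrees, then continue.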
  97. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the interaction method according to any one of claims 1 to 20 are implemented.
  98. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the interaction method according to any one of claims 21 to 32 are implemented.
  99. A computer program comprising computer-readable code which, when run on a computing processing device, causes the computing processing device to execute the interaction method according to any one of claims 1 to 20.
  100. A computer program comprising computer-readable code which, when run on a computing processing device, causes the computing processing device to execute the interaction method according to any one of claims 21 to 32.
PCT/CN2021/074916 2020-06-05 2021-02-02 Interaction method and device, earphone, and server WO2021244059A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010507540.4 2020-06-05
CN202010507540.4A CN111739529A (en) 2020-06-05 2020-06-05 Interaction method and device, earphone and server

Publications (1)

Publication Number Publication Date
WO2021244059A1 true WO2021244059A1 (en) 2021-12-09

Family

ID=72648376

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/074916 WO2021244059A1 (en) 2020-06-05 2021-02-02 Interaction method and device, earphone, and server

Country Status (2)

Country Link
CN (1) CN111739529A (en)
WO (1) WO2021244059A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739529A (en) * 2020-06-05 2020-10-02 北京搜狗科技发展有限公司 Interaction method and device, earphone and server
CN112749349A (en) * 2020-12-31 2021-05-04 北京搜狗科技发展有限公司 Interaction method and earphone equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107315561A (en) * 2017-06-30 2017-11-03 联想(北京)有限公司 A kind of data processing method and electronic equipment
CN108280067A (en) * 2018-02-26 2018-07-13 深圳市百泰实业股份有限公司 Earphone interpretation method and system
CN108900945A (en) * 2018-09-29 2018-11-27 上海与德科技有限公司 Bluetooth headset box and audio recognition method, server and storage medium
CN109785837A (en) * 2019-01-28 2019-05-21 上海与德通讯技术有限公司 Sound control method, device, TWS bluetooth headset and storage medium
CN111739529A (en) * 2020-06-05 2020-10-02 北京搜狗科技发展有限公司 Interaction method and device, earphone and server

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006251376A (en) * 2005-03-10 2006-09-21 Yamaha Corp Musical sound controller
US8320578B2 (en) * 2008-04-30 2012-11-27 Dp Technologies, Inc. Headset
CN103714836A (en) * 2012-09-29 2014-04-09 联想(北京)有限公司 Method for playing audio information and electronic equipment
CN104535074A (en) * 2014-12-05 2015-04-22 惠州Tcl移动通信有限公司 Bluetooth earphone-based voice navigation method, system and terminal
CN105263075B (en) * 2015-10-12 2018-12-25 深圳东方酷音信息技术有限公司 A kind of band aspect sensor earphone and its 3D sound field restoring method
US20170195795A1 (en) * 2015-12-30 2017-07-06 Cyber Group USA Inc. Intelligent 3d earphone
CN107515007A (en) * 2016-06-16 2017-12-26 北京小米移动软件有限公司 Air navigation aid and device
CN206490796U (en) * 2016-08-16 2017-09-12 北京金锐德路科技有限公司 Acoustic control intelligent earphone
CN107478239A (en) * 2017-08-15 2017-12-15 上海摩软通讯技术有限公司 Air navigation aid, navigation system and audio reproducing apparatus based on audio reproducing apparatus
CN107569217A (en) * 2017-08-29 2018-01-12 上海展扬通信技术有限公司 A kind of control method of intelligent earphone and the intelligent earphone
CN110139178A (en) * 2018-02-02 2019-08-16 中兴通讯股份有限公司 A kind of method, apparatus, equipment and the storage medium of determining terminal moving direction
CN108710486B (en) * 2018-05-28 2021-07-09 Oppo广东移动通信有限公司 Audio playing method and device, earphone and computer readable storage medium
CN108958846A (en) * 2018-09-27 2018-12-07 出门问问信息科技有限公司 A kind of creation method and device of notepad item
CN115240664A (en) * 2019-04-10 2022-10-25 华为技术有限公司 Man-machine interaction method and electronic equipment
CN111010641A (en) * 2019-12-20 2020-04-14 联想(北京)有限公司 Information processing method, earphone and electronic equipment

Also Published As

Publication number Publication date
CN111739529A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
WO2021244057A1 (en) Interaction method and apparatus, earphone, and earphone accommodation apparatus
JP7101322B2 (en) Voice trigger for digital assistant
JP2019117623A (en) Voice dialogue method, apparatus, device and storage medium
US20200142667A1 (en) Spatialized virtual personal assistant
WO2021244059A1 (en) Interaction method and device, earphone, and server
CN109643548A (en) System and method for content to be routed to associated output equipment
WO2021031308A1 (en) Audio processing method and device, and storage medium
CN108460138A (en) Music recommends method, apparatus, equipment and storage medium
CN110415703A (en) Voice memos information processing method and device
CN111739528A (en) Interaction method and device and earphone
US11929081B2 (en) Electronic apparatus and controlling method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21816801

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.03.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21816801

Country of ref document: EP

Kind code of ref document: A1