WO2019218369A1 - Portable intelligent voice interaction control device, method and system - Google Patents

Portable intelligent voice interaction control device, method and system

Info

Publication number
WO2019218369A1
WO2019218369A1 (PCT/CN2018/087576)
Authority
WO
WIPO (PCT)
Prior art keywords
earphone
communication module
network data
voice
processor
Prior art date
Application number
PCT/CN2018/087576
Other languages
English (en)
French (fr)
Inventor
邓超
Original Assignee
深圳傲智天下信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳傲智天下信息科技有限公司
Priority to EP18919359.2A (published as EP3621068A4)
Priority to PCT/CN2018/087576 (published as WO2019218369A1)
Priority to CN201820960500.3U (published as CN208507180U)
Priority to CN201810643457.2A (published as CN108550367A)
Publication of WO2019218369A1
Priority to US16/708,639 (published as US10809964B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/20Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/02Casings; Cabinets ; Supports therefor; Mountings therein
    • H04R1/028Casings; Cabinets ; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1016Earpieces of the intra-aural type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1025Accumulators or arrangements for charging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/04Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • H04S1/007Two-channel systems in which the audio signals are in digital form
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W88/00Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02Terminal devices
    • H04W88/06Terminal devices adapted for operation in multiple networks or having at least two operational modes, e.g. multi-mode terminals
    • AHUMAN NECESSITIES
    • A45HAND OR TRAVELLING ARTICLES
    • A45CPURSES; LUGGAGE; HAND CARRIED BAGS
    • A45C11/00Receptacles for purposes not provided for in groups A45C1/00-A45C9/00
    • A45C2011/001Receptacles for purposes not provided for in groups A45C1/00-A45C9/00 for portable audio devices, e.g. headphones or MP3-players
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1091Details not provided for in groups H04R1/1008 - H04R1/1083
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/17Hearing device specific tools used for storing or handling hearing devices or parts thereof, e.g. placement in the ear, replacement of cerumen barriers, repair, cleaning hearing devices

Definitions

  • the present invention relates to the field of intelligent voice device technologies, and in particular, to a portable intelligent voice interaction control device, method and system.
  • Echo (Echo Protocol): Echo is one of the most commonly used packets in networking. By sending an echo packet, the path to the currently connected node can be learned, and the path length can be estimated from the round-trip time.
  • Amazon's Echo Dot can listen, answer, and connect to and control other devices. The Echo Dot is essentially a "Mini-Me" of the Echo: a cloud-based, voice-activated AI voice interaction device that can be understood as a Siri with a physical body, ready to be called. The user wakes the Echo Dot by saying its name or another keyword. The Echo Dot's built-in microphone array is always listening for sound from all directions. Once the wake word is identified, the microphones record any subsequent words and send them to Amazon's cloud server, which recognizes the voice commands and returns instructions telling Alexa how to respond.
  • the present application provides a portable intelligent voice interaction control device, method and system that place no special requirement on the distance between the user and the device, can perform fast voice recognition even while music is playing or in a noisy environment, and provide a good user experience.
  • the present application provides a portable intelligent voice interaction control device including a body and an earphone detachably coupled to the body. The body includes a body casing and a rotating cover, the rotating cover being fastened to the body casing.
  • an earphone slot for holding the earphone is embedded in the surface of the body casing. A power output end is provided in the earphone slot and a power input end is provided on the earphone, so that when the earphone is placed in the earphone slot it is electrically connected to the body through the power output end and the power input end for charging. The rotating cover is provided with a receiving hole; rotating the cover horizontally exposes the earphone through the receiving hole so that it can be taken out. After the earphone is taken out of the earphone slot, it is connected to the body through wireless communication.
  • the body casing comprises a body upper shell, a sound-transmitting shell and a body lower shell. The sound-transmitting shell is disposed between the body upper shell and the body lower shell and is provided with mesh-shaped sound-transmitting holes.
  • a marquee lamp ring is further disposed at the joint between the sound-transmitting shell and the body upper shell, and a light-transmitting decorative band is disposed outside the marquee lamp ring.
  • a microphone array, a first communication module, a second communication module, a first speaker, and a body processor are disposed in the body casing, and the body processor is electrically connected to the microphone array, the first communication module, the second communication module, and the first speaker, respectively.
  • the second communication module comprises a mobile data network module, a cellular transceiver and a WiFi transceiver.
  • the earphone is of an in-ear type and comprises an in-ear soft gel tip and an earphone housing disposed at its end. The earphone housing is provided with an earphone communication module, an earphone processor, a second speaker and a pickup. The earphone processor is electrically connected to the earphone communication module, the second speaker and the pickup, and the earphone communication module is wirelessly connected to the first communication module.
  • the earphone communication module and the first communication module can be wirelessly connected through WiFi, Bluetooth or infrared; the earphone may be a TWS earphone, a classic stereo Bluetooth earphone or a classic single-sided Bluetooth earphone.
  • a body energy storage circuit electrically connected to the body processor is further disposed in the body casing. The body energy storage circuit is further connected with a charging module and a power output circuit, and the power output circuit is further connected to the power output end. The charging module includes a wireless charging module or a USB interface charging module. The earphone housing is further provided with an earphone energy storage circuit connected to the earphone processor, and the earphone energy storage circuit is further connected to the power input end.
  • the body is further provided with a body touch key and a body LED connected to the body processor
  • the earphone is further provided with a headphone touch key and an earphone LED connected to the earphone processor.
  • the second communication module further includes: an e-SIM card module.
  • the present application provides a portable intelligent voice interaction system, comprising the portable intelligent voice interaction control device according to the first aspect and a cloud server, wherein the portable intelligent voice interaction control device is in communication connection with the cloud server.
  • the present application provides a portable intelligent voice interaction method, the method comprising:
  • the earphone pickup device picks up the user voice, and the picked user voice performs analog-to-digital conversion through the earphone processor, and the digital voice signal obtained after the analog-to-digital conversion is sent to the first communication module through the earphone communication module;
  • after receiving the digital voice signal, the first communication module responds to it and sends the digital voice signal to the cloud server through the second communication module, logging in to the interface of the corresponding cloud server, which performs speech recognition and semantic analysis on the digital voice signal;
  • the cloud server calls the corresponding network data, and sends the network data to the second communication module.
  • the body processor responds to the network data and forwards it to the earphone communication module through the second communication module; the earphone processor responds to the network data and performs the corresponding voice broadcast through the second speaker according to the network data; or
  • the microphone array of the body picks up the user's voice, and the picked-up voice is analog-to-digital converted by the body processor;
  • the digital voice signal obtained after the analog-to-digital conversion is sent to the cloud server through the second communication module, logging in to the interface of the corresponding cloud server, which performs voice recognition and semantic analysis on the digital voice signal;
  • the cloud server calls the corresponding network data, and sends the network data to the second communication module.
  • the body processor responds to the network data and performs the corresponding voice broadcast through the first speaker according to the network data; or
  • the pickup of the earphone picks up the user's voice; the earphone processor performs analog-to-digital conversion on the picked-up voice, and the digital voice signal obtained after the conversion is sent to the first communication module through the earphone communication module;
  • after receiving the digital voice signal, the first communication module responds to it and sends the digital voice signal to the cloud server through the second communication module, logging in to the interface of the corresponding cloud server, which performs speech recognition and semantic analysis on the digital voice signal;
  • the cloud server calls the corresponding network data, and sends the network data to the second communication module.
  • the body processor responds to the network data and performs the corresponding voice broadcast through the first speaker according to the network data.
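The claimed interaction flow can be sketched in a few lines. This is only an illustration under assumed names (the patent defines no software API): `adc` stands in for the processor's analog-to-digital conversion, and a stub `CloudServer` stands in for speech recognition, semantic analysis, and the call to network data.

```python
# Hypothetical sketch of one voice-interaction round trip:
# pick up voice -> analog-to-digital conversion -> cloud recognition ->
# network data returned -> voice broadcast through a speaker.

def adc(analog_samples, levels=65536):
    """Analog-to-digital conversion: quantize samples in [-1.0, 1.0] to ints."""
    half = levels // 2
    return [max(-half, min(half - 1, int(s * half))) for s in analog_samples]

class CloudServer:
    """Stand-in for the cloud server's recognition/semantic-analysis interface."""
    def recognize_and_analyze(self, digital_voice):
        # A real server would run speech recognition and semantic analysis,
        # then call the corresponding network data (search result, music, ...).
        return {"intent": "weather_query", "network_data": "Sunny, 25 degrees"}

def interaction_round(analog_voice, server, speaker_log):
    digital_voice = adc(analog_voice)                     # processor: ADC
    result = server.recognize_and_analyze(digital_voice)  # via comm module
    speaker_log.append(result["network_data"])            # voice broadcast
    return result

log = []
outcome = interaction_round([0.0, 0.5, -0.5], CloudServer(), log)
```

Whether the broadcast goes to the earphone's second speaker or the body's first speaker depends on the usage mode; the sketch only models the shared round trip.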
  • the present application provides a computer readable storage medium comprising a program executable by a processor to implement the method of the third aspect.
  • the portable intelligent voice interaction device of the present application can pick up the user's voice not only through the pickup of the earphone but also through the microphone array, making voice interaction between person and device more convenient and flexible. Because the earphone is wirelessly connected to the body, there is no special requirement on the distance between the user and the device, and an earphone worn on the ear performs much better than a separately placed Echo Dot device: fast voice recognition is possible even while music is playing or in a noisy environment, so the user experience is better. The portable intelligent voice interaction device of the present application has the functions of an ordinary Echo Dot device and, in addition, can make calls, send short messages, and access the Internet. Its functions are therefore more comprehensive, meeting people's daily needs; it can replace the mobile phone to a certain extent, reducing screen time and protecting eyesight.
  • FIG. 1 is a schematic diagram of a portable intelligent voice interaction control device and system provided by the present application.
  • FIG. 2 is a structural block diagram of a portable intelligent voice interaction control device and system provided by an embodiment.
  • FIG. 3 is a perspective view of a portable intelligent voice interaction control device provided by an embodiment.
  • FIG. 4 is a schematic diagram of taking out the earphone of a portable intelligent voice interaction control device according to an embodiment.
  • FIG. 5 is a first side perspective view of a portable intelligent voice interaction control device according to an embodiment.
  • FIG. 6 is a second side perspective view of a portable intelligent voice interaction control device provided by an embodiment.
  • FIG. 7 is an exploded view of a portable intelligent voice interaction control device according to an embodiment.
  • FIG. 8 is a perspective view of an earphone provided by an embodiment.
  • FIG. 9 is a schematic diagram of a data interaction process of a portable intelligent voice interaction system according to an embodiment.
  • FIG. 10 is a schematic diagram of a data interaction process of a portable intelligent voice interaction system according to another embodiment.
  • FIG. 11 is a schematic diagram of a data interaction process of a portable intelligent voice interaction system in standby mode according to another embodiment.
  • Embodiment 1:
  • the present application provides a portable intelligent voice interaction device, including a body 10 and an earphone 20 detachably coupled to the body.
  • the body 10 includes a body casing 101 and a rotating cover 109 (circular as shown in FIG. 4).
  • the rotating cover 109 is fastened on the body casing 101.
  • the surface of the body casing 101 is embedded with an earphone slot 112 for placing the earphone.
  • the power output terminal 15 is disposed in the earphone slot 112, and the power input terminal 25 is disposed on the earphone 20.
  • the earphone 20 can be placed in the earphone slot 112, and electrically connected to the body 10 through the power output terminal 15 and the power input terminal 25 for charging.
  • a receiving hole is disposed on the rotating cover 109; rotating the cover 109 horizontally exposes the earphone 20 through the receiving hole so that it can be taken out (as shown in FIG. 4). After the earphone 20 is taken out of the earphone slot 112, it is connected to the body 10 through wireless communication.
  • the main body casing 101 includes a main body upper shell 1011, a sound-transmitting shell 1012 and a main body lower shell 1013. The sound-transmitting shell 1012 is disposed between the main body upper shell 1011 and the main body lower shell 1013 and is provided with mesh-shaped sound-transmitting holes.
  • a marquee lamp ring 113 is further disposed at the junction between the sound-transmitting shell 1012 and the upper shell 1011, and a light-transmitting decorative strip 108 is disposed outside the marquee lamp ring 113.
  • a microphone array 17, a first communication module 11, a second communication module 12, a first speaker 118, and a body processor 19 are disposed in the body casing 101, and the body processor 19 is electrically connected to the microphone array 17, the first communication module 11, the second communication module 12, and the first speaker 118, respectively.
  • the second communication module 12 includes a mobile data network module, a cellular transceiver, and a WiFi transceiver.
  • the microphone array 17 is configured to pick up the voice signal spoken by the user; the voice signal is processed by the body processor 19 (analog-to-digital conversion), converted into a corresponding digital voice signal, and sent out through the second communication module 12.
  • the earphone 20 is an in-ear type.
  • the earphone 20 includes an in-ear soft gel tip 202 and an earphone casing 201 disposed at its end.
  • the earphone casing 201 is provided with an earphone communication module 21, an earphone processor 29, a second speaker 28 and a pickup 27. The earphone processor 29 is electrically connected to the earphone communication module 21, the second speaker 28 and the pickup 27, respectively, and the earphone communication module 21 is wirelessly connected to the first communication module 11.
  • the pickup 27 is used to pick up the voice signal spoken by the user; the voice signal is processed by the earphone processor 29 (analog-to-digital conversion), converted into a corresponding digital voice signal, and sent out through the earphone communication module 21.
  • the portable intelligent voice interaction device of the present application has two usage modes: a split use mode and a combined use mode.
  • in the split use mode, the body processor 19 acquires the digital voice signal sent by the earphone communication module 21 through the first communication module 11, responds to the digital voice signal, and communicates with the cloud server 8 through the mobile data network module (3G/4G/5G) or the WiFi transceiver, sending the digital voice signal to the cloud server 8 and accessing it. The cloud server performs voice recognition and semantic analysis on the digital voice signal. If the analysis result relates to making a call or sending a short message, the analysis result is sent back to the portable intelligent voice interaction device, which then uses existing technology to make the call or send the short message through the cellular transceiver; otherwise, the corresponding Internet data processing is performed by the server.
  • in the combined use mode, when the earphone 20 is placed in the earphone slot 112 for charging, the body processor 19 directly picks up the voice signal spoken by the user through the microphone array 17, converts it into a digital voice signal, and communicates with the cloud server 8 through the mobile data network module (3G/4G/5G) or the WiFi transceiver, transmitting the digital voice signal to the cloud server 8 and accessing it. The cloud server 8 performs voice recognition and semantic analysis on the digital voice signal. If the analysis result relates to making a call or sending a short message, the analysis result is sent back to the portable intelligent voice interaction device, which makes the call or sends the short message through the cellular transceiver using existing technology; otherwise the server performs the corresponding Internet data processing, calls the corresponding network data, and sends the network data back to the portable intelligent voice interaction device, which, after receiving the network data, performs the corresponding voice broadcast according to the network data.
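The routing decision described above — call/SMS results executed on the device over the cellular transceiver, everything else handled as Internet data on the server — can be sketched as follows (the intent labels are assumptions, not names from the patent):

```python
# Hypothetical intent labels; the patent only distinguishes "call/short message"
# from "other Internet data processing".
LOCAL_CELLULAR_INTENTS = {"make_call", "send_short_message"}

def route_analysis_result(intent, payload):
    """Return where a recognized intent is executed and what is carried along."""
    if intent in LOCAL_CELLULAR_INTENTS:
        # The analysis result is sent back to the device, which completes the
        # call or short message through its cellular transceiver.
        return ("device_cellular", payload)
    # Otherwise the server performs the corresponding Internet data processing
    # and returns network data for voice broadcast.
    return ("server_internet_data", payload)
```

The same routing applies in both the split and combined use modes; only the pickup path and the broadcasting speaker differ.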
  • the cloud server 8 can launch a program to perform an Internet search (such as a Baidu search) and send the search result back to the portable intelligent voice interaction device for voice broadcast, launch a program to call a network music player resource for music playback, launch a road navigation application (such as Gaode Maps) for navigation, or launch a program to order audio programs, and so on.
  • the data interaction process is specifically as follows: the body processor 19 sends the digital voice signal to the cloud server 8 through the second communication module 12 and logs in to the interface of the corresponding cloud server 8, which performs voice recognition and semantic analysis on the digital voice signal; the cloud server 8 then calls the corresponding network data according to the analysis result and sends the network data to the portable intelligent voice interaction device, which receives the network data and performs the corresponding voice broadcast according to it.
  • the split use mode and the combined use mode differ in how the corresponding voice broadcast is performed according to the network data.
  • in the split use mode, the body processor 19 responds to the network data and forwards it to the earphone communication module 21 through the second communication module 12; after the earphone communication module 21 receives the data, the earphone processor 29 responds to the network data and performs the corresponding voice broadcast through the second speaker 28 according to the network data.
  • in the combined use mode, the body processor 19 responds to the network data and performs the corresponding voice broadcast through the first speaker 118 according to the network data.
  • a body energy storage circuit 13 electrically connected to the body processor 19 is further disposed in the body casing 101, together with a charging module 138 and a power output circuit 14 electrically connected to the body energy storage circuit 13. The power output circuit 14 is also connected to the power output terminal 15, and the body energy storage circuit 13 is also connected to the body battery 130.
  • the microphone array 17, the first communication module 11, the second communication module 12, the first speaker 118, the body processor 19, the body energy storage circuit 13, the power output circuit 14, the power output terminal 15, and the charging module 138 constitute the basic composition of the body 10. As shown in FIG. 2 and FIG. 8, in some embodiments the microphone array 17, the first communication module 11, the second communication module 12, the body processor 19, the body energy storage circuit 13, and the power output circuit 14 are disposed on the body PCB 100.
  • the earphone housing 201 is further provided with an earphone energy storage circuit 23 electrically connected to the earphone processor 29, and a power input terminal 25 electrically connected to the earphone energy storage circuit 23; the earphone energy storage circuit 23 is also connected to the earphone battery 230.
  • the power input end 25 of the earphone mates with the power output end 15 of the body, and may include, but is not limited to, metal contacts, metal contact faces, or metal male-and-female plug connectors; the metal contact form is shown in FIG.
  • the headphone communication module 21, the headphone processor 29, the second speaker 28, the pickup 27, the headphone energy storage circuit 23, and the power input terminal 25 constitute a basic composition of the earphone 20.
  • the earphone communication module 21, the earphone processor 29, the pickup 27, and the earphone energy storage circuit 23 are disposed on the earphone PCB 200.
  • the charging module 138 includes a wireless charging module or a USB interface charging module.
  • when the earphone 20 is placed in the earphone slot for charging, if the charging module 138 is not connected to an external power source, the body 10 transmits power from the body battery 130 through the power output circuit 14 and the power output terminal 15 to the power input terminal 25, so that the earphone 20 is charged; if the charging module 138 is connected to an external power source, the body 10 preferentially transmits power to the power input terminal 25 through the power output circuit 14 and the power output terminal 15, and only after the earphone 20 has finished charging does the body energy storage circuit 13 charge the body battery 130.
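The charging priority described above can be summarized as a small state rule (a sketch with illustrative names): the earphone is always charged first, and the body battery is only recharged, from external power, once the earphone is full.

```python
def charge_targets(external_power, earphone_full):
    """Which load receives power in the current state (illustrative sketch)."""
    if not earphone_full:
        # Power reaches the earphone via the power output circuit and terminal,
        # drawn from the external source if present, else from the body battery.
        return ["earphone"]
    if external_power:
        # Earphone finished: the body energy storage circuit charges the battery.
        return ["body_battery"]
    return []  # no external power and earphone full: nothing to charge
```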
  • in summary, the portable intelligent voice interaction device can pick up the user's voice through the pickup 27 of the earphone 20 as well as through the microphone array 17, making voice interaction between person and device more convenient and flexible. Since the earphone 20 is wirelessly connected to the body 10, there is no special requirement on the distance between the user and the device (only that they remain within wireless communication range), and an earphone worn on the ear performs much better than a separately placed Echo Dot device: fast speech recognition is possible even while music is playing or in a noisy environment, so the user experience is better. The portable intelligent voice interaction device of the present application can also pick up voice through the microphone array of the body, and the network data resources are sent back to the device by means of mobile data communication (3G/4G/5G) or WiFi communication.
  • the portable intelligent voice interaction device does not need a processor with powerful computing capability or an expensive display screen; it only needs good communication capability, so its hardware cost is lower than that of a mobile phone. It is also more convenient and intelligent to use, helping people reduce their dependence on the mobile phone.
  • the earphone 20 placed in the earphone slot 112 is detachably connected to the body 10 by magnetic attraction, snap fit, or clip fit; the magnetic-attraction form is shown in FIGS. 5 to 8, in which the earphone magnetic device 205 and the body magnetic device 105 attract each other.
  • a position sensor electrically connected to the body processor 19 is further disposed in the body casing 101. The position sensor is configured to detect whether the earphone 20 is correctly positioned in the earphone slot 112, so as to avoid a poor ("virtual") connection between the power input end 25 of the earphone and the power output terminal 15 of the body, which would impair the charging of the earphone.
  • the second communication module 12 further includes: an e-SIM card module.
  • the e-SIM card is embedded inside the body 10, so the user no longer needs to buy a card and insert it after purchasing the device; the user's own carrier network and package can be used directly through software registration or direct purchase. Omitting a separate SIM card slot allows the body 10 to be lighter and thinner and lowers manufacturing cost.
  • the earphone communication module 21 and the first communication module 11 can be connected wirelessly via WiFi, Bluetooth, or infrared.
  • preferably, the earphone communication module 21 and the first communication module 11 are connected by Bluetooth, that is, both include a Bluetooth module.
  • the earphone 20 can be a TWS earphone, a classic stereo Bluetooth earphone, or a classic single-sided Bluetooth earphone.
  • when the earphone 20 is a TWS earphone, the main earphone is placed in the left earphone slot 112a and the slave earphone in the right earphone slot 112b.
  • during voice playback, the main earphone obtains through the earphone communication module 21 the voice signal sent by the body 10 via the first communication module 11 (including call voice, text-message voice signals, and network data returned by the cloud server 8).
  • the main earphone sends the voice signal to its own speaker for playback and forwards it to the slave earphone by near-field communication such as microwave communication.
  • when picking up the user's voice, either the main or the slave earphone can capture the signal and send it to the body 10 through the first communication module 11; the body processor 19 denoises and compares the voices picked up by the two earphones and merges them into a single voice signal.
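The patent only says the body processor denoises, compares, and merges the two pickups; it gives no algorithm. The following is a minimal Python sketch under that assumption, averaging aligned sample pairs and falling back to the longer channel. The function name and the simple averaging strategy are illustrative, not taken from the patent; real firmware would first time-align and denoise the streams.

```python
def merge_picked_voice(main_samples, slave_samples):
    """Fuse the voice picked up by the main and slave earphones into one signal.

    Overlapping samples are averaged; any tail present in only one
    channel (the streams may differ in length) is appended as-is.
    """
    merged = [(a + b) / 2.0 for a, b in zip(main_samples, slave_samples)]
    longer = main_samples if len(main_samples) >= len(slave_samples) else slave_samples
    merged.extend(longer[len(merged):])  # keep the un-paired tail, if any
    return merged
```

For example, `merge_picked_voice([0, 2], [2, 0])` yields `[1.0, 1.0]`.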
  • the second communication module 12 can also establish a wireless connection between the wristband-type AI voice interaction device and existing smart home devices, so that the smart home devices can be controlled by AI voice and the wristband-type AI voice interactive device becomes a "remote control" for the smart home.
  • specifically, the earphone 20 picks up a voice command spoken by the user; the earphone processor 29 responds and sends the command through the earphone communication module 21 to the body's first communication module 11, and the body processor 19 forwards it through the second communication module 12 to the smart home device, which then performs operations such as powering on, powering off, or adjusting temperature. Alternatively, the microphone array 17 picks up the spoken command and the body processor 19 sends it through the second communication module 12 to the smart home device with the same effect.
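To make the dispatch step above concrete, here is a minimal Python sketch of mapping a recognized command phrase to an operation for the smart home device. The patent names power on, power off, and temperature adjustment as example operations; the command table, phrases, and function name here are hypothetical illustrations, not part of the patent.

```python
def dispatch_home_command(text):
    """Parse a recognized voice command and return the operation to send
    to the smart home device over the second communication module."""
    text = text.lower().strip()
    if "power on" in text or "turn on" in text:
        return {"op": "power_on"}
    if "power off" in text or "turn off" in text:
        return {"op": "power_off"}
    if "temperature" in text:
        # Extract a spoken target value, e.g. "set temperature to 24".
        digits = "".join(ch for ch in text if ch.isdigit())
        return {"op": "set_temperature", "value": int(digits) if digits else None}
    return {"op": "unknown"}
```

For example, `dispatch_home_command("set temperature to 24")` returns `{"op": "set_temperature", "value": 24}`.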
  • the body 10 is further provided with a body touch key 104 and a body LED 103 connected to the body processor 19; the body LED 103 includes a battery indicator light, a SIM card light, a WiFi light, and a voice light.
  • the battery indicator light has 4 grids. When displaying the charge level:
  • charge greater than 75% and up to 100%: all four grids lit;
  • charge greater than 50% and up to 75%: three grids lit;
  • charge greater than 25% and up to 50%: two grids lit;
  • charge greater than 10% and up to 25%: one grid lit;
  • charge at or below 10%: one grid "breathes" (pulses).
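The threshold table above can be expressed as a small lookup; the following Python sketch is illustrative (the function name and return shape are assumptions, not from the patent), returning how many grids are lit and whether the lit grid should breathe.

```python
def battery_led_state(percent):
    """Map a battery percentage to (lit_grids, breathing) on a 4-grid gauge.

    Thresholds follow the description: >75% lights 4 grids, >50% three,
    >25% two, >10% one, and at or below 10% one grid pulses ("breathes").
    """
    if percent > 75:
        return 4, False
    if percent > 50:
        return 3, False
    if percent > 25:
        return 2, False
    if percent > 10:
        return 1, False
    return 1, True  # low battery: single grid breathes
```

For example, a 60% charge gives `(3, False)` and an 8% charge gives `(1, True)`.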
  • SIM card light: steady green indicates a signal, flashing indicates network search, off indicates no service;
  • WiFi light: steady green indicates a signal, flashing indicates network search, off indicates no service;
  • during data transfer, the WiFi green light breathes when WiFi traffic is used and the SIM green light breathes when SIM card traffic is used; WiFi is used preferentially;
  • voice light: lit green after wake-up, flashing green while searching, breathing green while broadcasting.
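The "WiFi preferred" rule and the breathing indicators can be sketched together. This Python fragment is a minimal illustration under assumed names (`data_path_indicator`, the string labels); the patent specifies only the priority and the indicator behavior, not an API.

```python
def data_path_indicator(wifi_available, sim_available):
    """Pick the data path, preferring WiFi over SIM card traffic, and
    report which green indicator should 'breathe' during the transfer.

    Returns (path, breathing_led), or (None, None) when there is no service.
    """
    if wifi_available:
        return "wifi", "wifi_green"
    if sim_available:
        return "sim", "sim_green"
    return None, None
```

So with both networks present the device transfers over WiFi and the WiFi green light breathes.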
  • the earphone 20 is also provided with an earphone touch key 204 and an earphone LED 203 that are coupled to the earphone processor 29.
  • when the earphone 20 is placed in the earphone slot 112 for charging, it is normally in an inactive sleep state; however, if the microphone array 17 is damaged and unusable, a standby mode can be enabled. That is, the earphone 20 is woken by the earphone touch key 204 so that its pickup 27 picks up the user's voice; the earphone 20 sends the analog-to-digital-converted digital voice signal to the first communication module 11, and the body processor 19, in response, connects to the cloud server 8 through the mobile data network module (3G/4G/5G) or the WiFi transceiver, sending the digital voice signal to the cloud server 8 and accessing it for data interaction.
  • in standby mode, the first speaker 118 is still used for voice broadcast; that is, the call voice received by the cellular transceiver, short-message voice signals, and the network data sent by the cloud server 8 are not sent by the body processor 19 to the earphone 20, but directly to the first speaker 118, which performs the voice announcement.
  • when the earphone 20 has been taken out of the earphone slot 112, long-pressing the earphone touch key 204 makes the earphone 20 emit a signal searching for the body 10 through the earphone communication module 21; when the body 10 receives that signal through the first communication module 11, all indicator lights of the body LED 103 flash at high frequency and the first speaker 118 emits a prompt tone, helping the user retrieve a lost body 10. Likewise, long-pressing the body touch key 104 makes the body 10 send a search-for-earphone signal through the first communication module 11; when the earphone 20 receives it through the earphone communication module 21, all indicators of the earphone LED 203 flash at high frequency and the second speaker 28 emits a prompt tone, helping the user retrieve a lost earphone.
  • the wristband-type AI voice interactive device of the present application thus has a retrieval function that prevents loss of the earphone or the charging stand (body 10). Compared with existing earphone charging stands (including existing TWS earphone charging stands), it is therefore not only convenient to carry but also hard to lose.
  • in summary, the wristband-type AI voice interactive device of the present application stores the earphone inside the body, making the TWS earphone and its charging stand easy to carry; because the TWS earphone connects to a charging stand (body) that can itself make calls, send messages, and access the Internet, the earphone no longer depends on a mobile phone. With these functions delivered through AI voice interaction instead of screen-based operation, the device can meet people's daily needs, reduce dependence on the mobile phone, cut screen time, and protect eyesight.
  • Embodiment 2:
  • the present application further provides a portable intelligent voice interaction system, comprising the portable intelligent voice interaction control device of Embodiment 1 and a cloud server 8, the two being communicatively connected.
  • in the split usage mode of Embodiment 1 (earphone taken out), the data interaction of the portable intelligent voice interaction system proceeds as follows:
  • voice recognition stage: the pickup 27 of the earphone 20 picks up the user's voice, the earphone processor 29 performs analog-to-digital conversion, and the resulting digital voice signal is sent through the earphone communication module 21 to the first communication module 11; on receiving it, the body processor 19 responds by sending a login access request and the digital voice signal to the cloud server 8 through the second communication module 12, logging in to the interface of the corresponding cloud server 8, where voice recognition and semantic analysis are performed on the signal;
  • data invocation stage: after the analysis, the cloud server 8 retrieves the corresponding network data, such as running a Baidu search for results, calling a network music player for song audio resources, or using Amap (高德地图) to search and plan a route for navigation data, and sends that network data to the portable intelligent voice interaction control device;
  • voice broadcast stage: after receiving the network data from the cloud server 8 through the second communication module 12, the portable intelligent voice interaction control device forwards it through the first communication module 11 to the earphone 20, which broadcasts it accordingly through the second speaker 28.
  • in the combined usage mode of Embodiment 1 (earphone docked), the data interaction of the portable intelligent voice interaction system proceeds as follows:
  • voice recognition stage: the microphone array 17 directly picks up the voice signal spoken by the user and converts it into a digital voice signal; the second communication module 12 (mobile data network module or WiFi transceiver) then connects to the cloud server 8, sends the digital voice signal there, and logs in to the interface of the corresponding cloud server 8, where voice recognition and semantic analysis are performed;
  • data invocation stage: after the analysis, the cloud server 8 retrieves the corresponding network data, such as running a Baidu search for results, calling a network music player for song audio resources, or using Amap (高德地图) to search and plan a route for navigation data, and sends that network data to the portable intelligent voice interaction control device;
  • voice broadcast stage: after the portable intelligent voice interaction control device receives the network data from the cloud server 8 through the second communication module 12, the body processor 19 broadcasts it accordingly through the first speaker 118.
  • in the standby mode of Embodiment 1, the data interaction of the portable intelligent voice interaction system proceeds as follows:
  • voice recognition stage: the pickup 27 of the earphone 20 picks up the user's voice, the earphone processor 29 performs analog-to-digital conversion, and the resulting digital voice signal is sent through the earphone communication module 21 to the first communication module 11; on receiving it, the body processor 19 responds by sending a login access request and the digital voice signal to the cloud server 8 through the second communication module 12, logging in to the interface of the corresponding cloud server 8, where voice recognition and semantic analysis are performed on the signal;
  • data invocation stage: after the analysis, the cloud server 8 retrieves the corresponding network data, such as running a Baidu search for results, calling a network music player for song audio resources, or using Amap (高德地图) to search and plan a route for navigation data, and sends that network data to the portable intelligent voice interaction control device;
  • voice broadcast stage: after the portable intelligent voice interaction control device receives the network data from the cloud server 8 through the second communication module 12, the body processor 19 broadcasts it accordingly through the first speaker 118.
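All three usage modes share the same three stages: recognition (digitize and upload the voice), data invocation (the cloud returns network data), and broadcast (play the data on a speaker). The following Python sketch wires those stages together; the function name and the callable-based interface are assumptions for illustration, since the patent defines no concrete API.

```python
def voice_interaction_round(pickup, cloud, speaker):
    """Run one interaction round through the three stages described above.

    `pickup` returns a digitized voice signal, `cloud` performs
    recognition/semantic analysis and returns network data, and
    `speaker` broadcasts that data; all three are caller-supplied.
    """
    digital_voice = pickup()              # stage 1: pick up voice + A/D conversion
    network_data = cloud(digital_voice)   # stage 2: cloud recognition + data invocation
    speaker(network_data)                 # stage 3: voice broadcast
    return network_data
```

In the split mode `speaker` would wrap the earphone's second speaker 28, while in the combined and standby modes it would wrap the body's first speaker 118.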
  • the present application further provides a portable intelligent voice interaction method, including:
  • when the earphone 20 is taken out of the earphone slot 112: the pickup 27 of the earphone 20 picks up the user's voice, the earphone processor 29 performs analog-to-digital conversion, and the resulting digital voice signal is sent through the earphone communication module 21 to the first communication module 11; on receiving it, the body processor 19 responds by sending a login access request and the digital voice signal to the cloud server 8 through the second communication module 12, logging in to the interface of the corresponding cloud server for voice recognition and semantic analysis; after the analysis, the cloud server 8 retrieves the corresponding network data and sends it to the second communication module 12; the body processor 19 responds to the network data and forwards it to the earphone communication module 21, whereupon the earphone processor 29 responds and broadcasts the network data accordingly through the second speaker 28;
  • when the earphone 20 is placed in the earphone slot 112 for charging: the microphone array 17 of the body 10 picks up the user's voice and the body processor 19 performs analog-to-digital conversion; the resulting digital voice signal, together with a login access request, is sent to the cloud server 8 through the second communication module 12, logging in to the interface of the corresponding cloud server for voice recognition and semantic analysis; after the analysis, the cloud server 8 retrieves the corresponding network data and sends it to the second communication module 12; the body processor 19 responds to the network data and broadcasts it accordingly through the first speaker 118;
  • after standby mode is enabled while the earphone 20 is charging in the earphone slot 112: the pickup 27 of the earphone 20 picks up the user's voice, the earphone processor 29 performs analog-to-digital conversion, and the resulting digital voice signal is sent through the earphone communication module 21 to the first communication module 11; on receiving it, the body processor 19 responds by sending a login access request and the digital voice signal to the cloud server 8 through the second communication module 12, logging in to the interface of the corresponding cloud server for voice recognition and semantic analysis; after the analysis, the cloud server 8 retrieves the corresponding network data and sends it to the second communication module 12; the body processor 19 responds to the network data and broadcasts it accordingly through the first speaker 118.
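The three method branches above differ mainly in which speaker performs the broadcast. A minimal Python sketch of that routing follows; the mode labels ("split", "combined", "standby") are names chosen here for the three branches, not terms fixed by the patent.

```python
def select_speaker(mode):
    """Route the cloud's network data to the correct speaker per usage mode:
    'split' (earphone out of its slot) plays through the earphone's second
    speaker 28; 'combined' and 'standby' (earphone charging in the slot)
    play through the body's first speaker 118."""
    if mode == "split":
        return "second_speaker_28"
    if mode in ("combined", "standby"):
        return "first_speaker_118"
    raise ValueError("unknown mode: %r" % mode)
```

Note that in standby mode the earphone still does the pickup, but the body's first speaker 118 still does the playback.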
  • all or part of the functions of the methods above can be implemented in hardware or by a computer program. When implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, and the like; a computer executes the program to realize the functions above.
  • for example, the program is stored in the memory of the device, and when the processor executes the program in memory, all or part of the functions above can be realized.
  • the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and be downloaded or copied into the memory of the local device, or used to update the local device's system; when the processor executes the program in memory, all or part of the functions above can be realized.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Telephone Function (AREA)
  • Selective Calling Equipment (AREA)

Abstract

A portable intelligent voice interaction control device comprises a body and an earphone detachably connected to the body. The body comprises a body casing and a rotating cover; an earphone slot for holding the earphone is embedded in the surface of the body casing, and the earphone can be placed in the slot and electrically connected to the body for charging. Rotating the cover horizontally allows the earphone to be taken out through a removal hole; once removed, the earphone communicates with the body wirelessly. Because the portable intelligent voice interaction device of the present application can pick up the user's voice both through the earphone's pickup and through a microphone array, voice interaction between user and device is more convenient and flexible. The device combines the functions of an ordinary Echo Dot device with calling, text messaging, and Internet access; its fuller feature set can meet people's daily needs, replace the functions of a mobile phone to some extent, reduce screen time, and protect eyesight.

Description

一种便携式智能语音交互控制设备、方法及系统 技术领域
本发明涉及智能语音设备技术领域,具体涉及一种便携式智能语音交互控制设备、方法及系统。
背景技术
Echo(Echo Protocol,应答协议),Echo是路由也是网络中最常用的数据包,可以通过发送echo包知道当前的连接节点有那些路径,并且通过往返时间能得出路径长度。亚马逊推出的Echo Dot可以听音、播音、与其他设备连接并控制它们,Echo Dot,本质上就是Echo的"Mini-Me",是一种基于云端且通过语音激活的AI语音交互装置,可以理解为时一个有实体、随时可呼唤的Siri,用户通过说出她的名字或其他关键词来唤醒Echo Dot功能。Echo Dot内置的麦克风阵列时刻处于就绪状态,监听来自四周的声音。一旦识别到关键词,这些麦克风将会记录下接下来的任何话语,然后将它们发送至亚马逊的云端服务器,这些服务器对语音命令进行识别处理之后,会返回指令告知Alexa如何做出回应。
现有的Echo Dot往往自身没有配备扬声器,需要提供的插孔和连接线来与已有的扬声器连接,或者通过蓝牙方式与已有的扬声器连接;而且,Dot的扬声器阵列在拾取语音命令方面表现略差,特别是正在播放音乐或处于嘈杂环境中时,用户需要缩短自己与EchoDot的距离才能完成激活,否则可能无法快速识别命令,用户体验不好,给使用带来不便。
发明内容
本申请提供一种便携式智能语音交互控制设备、方法及系统,对用户与Echo Dot设备的距离无特别要求,在播放音乐或处于嘈杂环境中也可以进行快速的语音识别,用户体验较好。
根据第一方面,本申请提供一种便携式智能语音交互控制设备,包括本体以及与所述本体可拆卸连接的耳机,所述本体包括本体外壳和旋转盖,所述旋转盖扣于本体外壳之上,本体外壳表面嵌入式设置有用于放置耳机的耳机槽,耳机槽内设置有电能输出端,耳机上设置有电能输入端,耳机可放置于所述耳机槽内,通过所述电能输出端和所述电能输入端实现与本体电连接进行充电,所述旋转盖上设置有取物孔,水平旋转所述旋转盖可使得耳机从所述取物孔露出便于耳机取出;所述耳机从耳机槽取出后,通过无线通讯与所述本体通讯连接;
所述本体外壳包括:本体上壳、透声壳体和本体下壳,透声壳体设置于本体上壳与本体下壳之间,所述透声壳体上设置有网状透声孔;所述透声壳体与本体上壳连接处还设置有跑马灯圈,所述跑马灯圈外侧设置有透光装饰带。
在一些实施例,所述本体外壳内设置有麦克风阵列、第一通讯模块、第二通讯模块、第一扬声器和本体处理器,所述本体处理器分别与麦克风阵列、第一通讯模块、第二通讯模块、第一扬声器电连接,所述第二通讯模块包括移动数据上网模块、蜂窝收发器和WiFi收发器;所述耳机为入耳式,所述耳机包括设置于端部的入耳软胶和耳机外壳,耳机外壳内设置有耳机通讯模块、耳机处理器、第二扬声器和拾音器,耳机处理器分别与耳机通讯模块、第二扬声器和拾音器电连接,所述耳机通讯模块与第一通讯模块无线连接。
在一些实施例,所述耳机通讯模块与第一通讯模块可通过WiFi、蓝牙或红外无线连接方式实现无线连接;所述耳机包括:TWS耳机、经典立体声蓝牙耳机或经典单边蓝牙耳机。
在一些实施例,所述本体外壳内还设置有与所述本体处理器电连接的本体储能电路,所述本体储能电路还连接有充电模块、电能输出电路,所述电能输出电路还连接有充电模块、电能输出端,所述充电模块包括无线充电模块或USB接口充电模块;所述耳机外壳内还设置有与耳机处理器相连接的耳机储能器电路,所述耳机储能电路还连接有电能输入端。
在一些实施例,所述本体还设置有与所述本体处理器相连接的本体触摸键和本体LED,所述耳机上还设置有与所述耳机处理器相连接的耳机触摸键和耳机LED。
在一些实施例,所述第二通讯模块还包括:e-SIM卡模块。
根据第二方面,本申请提供一种便携式智能语音交互系统,包括:如第一方面所述的便携式智能语音交互控制设备以及云服务器,所述便携式智能语音交互控制设备与所述云服务器通讯连接。
根据第三方面,本申请提供一种便携式智能语音交互方法,该方法包括:
耳机从耳机槽内取出时,耳机的拾音器拾取用户语音,拾取的用户语音通过耳机处理器进行模数转换,模数转换后得到的数字语音信号通过耳机通讯模块发送至第一通讯模块;
第一通讯模块接收到所述数字语音信号后,本体处理器响应所述数字语音信号,通过第二通讯模块向云服务器发送所述数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;
分析完成后,云服务器调用对应网络数据,并将网络数据发送至第二通讯模块,第二通讯模块接收网络数据后,本体处理器响应网络数据,通过第二通讯模块转发至耳机通讯模块,耳机通讯模块接收到网络数据后,耳机处理器响应网络数据,通过第二扬声器根据网络数据进行相应的语音播报;或者
耳机放置于耳机槽内充电时,本体的麦克风阵列拾取用户语音,拾 取的用户语音通过本体处理器进行模数转换;
模数转换后得到的数字语音信号通过第二通讯模块向云服务器发送所述数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;
分析完成后,云服务器调用对应网络数据,并将网络数据发送至第二通讯模块,第二通讯模块接收网络数据后,本体处理器响应网络数据,通过第一扬声器根据网络数据进行相应的语音播报;或者
耳机放置于耳机槽内充电时,耳机的拾音器拾取用户语音,拾取的用户语音通过耳机处理器进行模数转换,模数转换后得到的数字语音信号通过耳机通讯模块发送至第一通讯模块;
第一通讯模块接收到所述数字语音信号后,本体处理器响应所述数字语音信号,通过第二通讯模块向云服务器发送所述数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;
分析完成后,云服务器调用对应网络数据,并将网络数据发送至第二通讯模块,第二通讯模块接收网络数据后,本体处理器响应网络数据,通过第一扬声器根据网络数据进行相应的语音播报。
根据第四方面,本申请提供一种计算机可读存储介质,其特征在于,包括程序,所述程序能够被处理器执行以实现如第三方面所述的方法。
依据上述实施例,由于本申请的便携式智能语音交互设备不仅可通过耳机的拾音器拾取用户语音,还可通过麦克风阵列拾取用户语音,实现人与设备的语音交互更加方便、灵活;其中,由于耳机与本体无线通讯连接,对用户与Echo Dot设备的距离无特别要求,又由于佩戴与耳朵上的耳机比单独放置的Echo Dot设备拾音效果好很多,在播放音乐或处于嘈杂环境中也可以进行快速的语音识别,用户体验较好;本申请的便携式智能语音交互设备兼具普通Echo Dot设备的功能,另外,由于本申请的便携式智能语音交互设备还可实现拨打电话、发送短信功能和上网功能,功能更加丰富全面的,可满足人们的日常需要,在一定程度上可以取代手机的功能,减少人们的用眼时间,保护视力。
附图说明
图1为本申请提供的一种便携式智能语音交互控制设备及系统示意图;
图2为一种实施例提供的便携式智能语音交互控制设备及系统结构框图;
图3为一种实施例提供的便携式智能语音交互控制设备的立体图;
图4为一种实施例提供的便携式智能语音交互控制设备取耳机操作示意图;
图5为一种实施例提供的便携式智能语音交互控制设备的侧面透视 图一;
图6为一种实施例提供的便携式智能语音交互控制设备的侧面透视图二;
图7为一种实施例提供的便携式智能语音交互控制设备爆炸图;
图8为一种实施例提供的耳机透视图;
图9为一种实施例提供的便携式智能语音交互系统数据交互过程示意图;
图10为另一种实施例提供的便携式智能语音交互系统数据交互过程示意图;
图11为又一种实施例提供的备用方式下便携式智能语音交互系统数据交互过程示意图。
附图标记:云服务器8,本体10,第一通讯模块11,第二通讯模块12,本体储能电路13,电能输出电路14,电能输出端15,位置传感器16,麦克风阵列17,本体处理器19,耳机20,耳机通讯模块21,耳机储能电路23,电能输入端25,拾音器27,第二扬声器28,耳机处理器29,本体PCB100,本体外壳101,本体LED103,本体触摸键104,本体磁吸装置105,透光装饰带108,旋转盖109,耳机槽112,左耳机槽112a,右耳机槽112b,跑马灯圈113,第一扬声器118,本体电池130,充电模块138,耳机PCB200,耳机外壳201,入耳软胶202,耳机LED203,耳机触摸键204,耳机磁吸装置205,拾音孔206,上盖板207,耳机电池230,本体上壳1011,透声壳体1012,本体下壳1013。
具体实施方式
下面通过具体实施方式结合附图对本发明作进一步详细说明。其中不同实施方式中类似元件采用了相关联的类似的元件标号。在以下的实施方式中,很多细节描述是为了使得本申请能被更好的理解。然而,本领域技术人员可以毫不费力的认识到,其中部分特征在不同情况下是可以省略的,或者可以由其他元件、材料、方法所替代。在某些情况下,本申请相关的一些操作并没有在说明书中显示或者描述,这是为了避免本申请的核心部分被过多的描述所淹没,而对于本领域技术人员而言,详细描述这些相关操作并不是必要的,他们根据说明书中的描述以及本领域的一般技术知识即可完整了解相关操作。
另外,说明书中所描述的特点、操作或者特征可以以任意适当的方式结合形成各种实施方式。同时,方法描述中的各步骤或者动作也可以按照本领域技术人员所能显而易见的方式进行顺序调换或调整。因此,说明书和附图中的各种顺序只是为了清楚描述某一个实施例,并不意味着是必须的顺序,除非另有说明其中某个顺序是必须遵循的。
本文中为部件所编序号本身,例如“第一”、“第二”等,仅用于区 分所描述的对象,不具有任何顺序或技术含义。而本申请所说“连接”、“联接”,如无特别说明,均包括直接和间接连接(联接)。
实施例一:
请参考图1至图8,本申请提供一种便携式智能语音交互设备,包括本体10以及与本体可拆卸连接的耳机20。
本体10包括本体外壳101和旋转盖109(如图4所示为圆形),旋转盖109扣于本体外壳101之上,本体外壳101表面嵌入式设置有用于放置耳机的耳机槽112,耳机槽112内设置有电能输出端15,耳机20上设置有电能输入端25,耳机20可放置于耳机槽112内,通过电能输出端15和电能输入端25实现与本体10电连接进行充电,旋转盖109上设置有取物孔,水平旋转旋转盖109可使得耳机20从该取物孔露出便于耳机20取出(如图4所示);耳机20从耳机槽112取出后,通过无线通讯与本体10通讯连接。
如图5和图7所示,本体外壳101包括:本体上壳1011、透声壳1012体和本体下壳1013,透声壳体1012设置于本体上壳1011与本体下壳1013之间,该透声壳体1012上设置有网状透声孔;该透声壳体1012与本体上壳1011连接处还设置有跑马灯圈113,跑马灯圈113外侧设置有透光装饰带108。
如图2所示,本体外壳101内设置有麦克风阵列17、第一通讯模块11、第二通讯模块12、第一扬声器118和本体处理器19,本体处理器19分别与麦克风阵列17、第一通讯模块11、第二通讯模块12、第一扬声器118电连接,第二通讯模块12包括移动数据上网模块、蜂窝收发器和WiFi收发器。麦克风阵列17用于拾取用户说出的语音信号,所述语音信号经本体处理器19处理(模数转换)后,转换为相应的数字语音信号经第二通讯模块21发出。
如图2、图6和图8所示,耳机20为入耳式,耳机20包括设置于端部的入耳软胶202和耳机外壳201,耳机外壳201内设置有耳机通讯模块21、耳机处理器29、扬声器28和拾音器27,耳机处理器29分别与耳机通讯模块21、扬声器28和拾音器27电连接,该耳机通讯模块21与第一通讯模块11无线连接。拾音器27用于拾取用户说出的语音信号,所述语音信号经耳机处理器29处理(模数转换)后,转换为相应的数字语音信号经耳机通讯模块21发出。
本申请的便携式智能语音交互设备在使用时具有两种使用方式:分体使用方式和合体使用方式。
分体使用方式时,即当耳机20从耳机槽112取出时,本体处理器19通过第一无线通讯模块11获取耳机通讯模块21发出的数字语音信号,并响应该数字语音信号,通过移动数据上网模块(3G/4G/5G)或 WiFi收发器与云服务器8进行通讯连接,将该数字语音信号发送至云服务器8并访问云服务器8,对该数字语音信号进行语音识别和语音分析,若分析的结果与拨打电话、发送短信有关,将分析结果发送回便携式智能语音交互,再通过蜂窝收发器利用现有技术实现拨打电话、发送短信等功能,否则,由服务器进行相应的互联网数据处理。
合体使用方式时,即当耳机20放置于耳机槽112内充电时,本体处理器19通过麦克风阵列17直接拾取用户说出的语音信号,并将其转换为数字语音信号,通过移动数据上网模块(3G/4G/5G)或WiFi收发器与云服务器8进行通讯连接,将该数字语音信号发送至云服务器8并访问云服务器8。服务器8对该数字语音信号进行语音识别和语音分析,若分析的结果与拨打电话、发送短信有关,将分析结果发送回便携式智能语音交互设备,使得便携式智能语音交互设备再通过蜂窝收发器利用现有技术实现拨打电话、发送短信等功能;否则,由服务器进行相应的互联网数据处理,调用对应网络数据,并将该网络数据发送回腕带式AI语音交互装置,腕带式AI语音交互装置接收该网络数据后,使得耳机根据该网络数据进行相应的语音播报。
例如:云服务器8可启动程序以进行互联网搜索(如百度搜索)并将搜索结果发回便携式智能语音交互设备并以语音形式播报、或启动程序以调用网络音乐播放器资源进行音乐播放、或启动道路导航应用程序(如高德地图)进行导航、或启动程序以点播音频节目等等。
其中,数据交互过程具体为:处理器19将数字语音信号通过第二无线通讯模块12发送至云服务器8,登录相应云服务器8的接口,对数字语音信号进行语音识别及语义分析;分析完成后,云服务器8根据相应的结果调用对应网络数据,并将该网络数据发送至便携式智能语音交互设备,便携式智能语音交互设备接收该网络数据,根据该网络数据进行相应的语音播报。
另外,便携式智能语音交互设备接收该网络数据后,根据该网络数据进行相应的语音播报时在分体使用方式和合体使用方式不太相同。
分体使用方式时,便携式智能语音交互设备通过第二通讯模块12接收网络数据后,本体处理器19响应该网络数据,通过第二通讯模块12转发至耳机通讯模块21,耳机通讯模块21接收到网络数据后,耳机处理器29响应网络数据,通过第二扬声器28根据网络数据进行相应的语音播报。
合体使用方式时,便携式智能语音交互设备通过第二通讯模块12接收网络数据后,本体处理器19响应该网络数据,通过第一扬声器118根据网络数据进行相应的语音播报。
参考图2和图7,本体外壳101内还设置有与本体处理器19电连接 的本体储能电路13,以及与本体储能电路13电连接的充电模块138、电能输出电路14,电能输出电路14还连接有充电模块138、电能输出端15,本体储能电路13还连接有本体电池130。
上述结构中,麦克风阵列17、第一通讯模块11、第二通讯模块12、第一扬声器118、本体处理器19、本体储能电路13、电能输出电路14、电能输出端15和充电模块138构成了本体10的基本组成结构。如图2和图8所示,在一些实施例中,麦克风阵列17、第一通讯模块11、第二通讯模块12、本体处理器19、本体储能电路13和电能输出电路14设置于本体PCB100上。
参考图2、图6和图8,耳机外壳201内还设置有与耳机处理器19电连接的耳机储能电路23,以及与耳机储能电路23电连接的电能输入端25,耳机储能电路23还连接有耳机电池230。耳机的电能输入端25与本体的电能输出端15与相匹配,可以包括但不限于金属触点、金属薄面或金属公母插接头等形式,图6中显示的为金属触点形式。
上述结构中,耳机通讯模块21、耳机处理器29、第二扬声器28、拾音器27、耳机储能电路23和电能输入端25构成了耳机20的基本组成。如图2和图5所示,在一些实施例中,上述结构中,耳机通讯模块21、耳机处理器29、拾音器27和耳机储能电路23设置于耳机PCB200上。
在一些实施例中,充电模块138包括现有的无线充电模块或传统的USB接口充电模块。
耳机充电时,当耳机10放置于耳机槽内时,若充电模块138未与外部电源相连接,本体10通过本体电池130、电能输出电路14和电能输出端15向电能输入端25传输电能,使得耳机20获得充电;若充电模块138与外部电源相连接的情况下,本体10优先利用电能输出电路14和电能输出端15向电能输入端25传输电能,在耳机20充电完成后,再利用本体储能电路13向本体电池130充电。
由此可见,由于具有本体10和耳机20这样的基本组成结构,便携式智能语音交互设备可通过耳机20的拾音器27拾取用户语音,也可通过麦克风阵列17拾取用户语音,实现人与设备的语音交互更加方便、灵活;其中,由于耳机20与本体10无线通讯连接,对用户与Echo Dot设备的距离无特别要求(只需要在可进行无线通讯的氛围内即可),又由于佩戴与耳朵上的耳机比单独放置的Echo Dot设备拾音效果好很多,在播放音乐或处于嘈杂环境中也可以进行快速的语音识别,用户体验较好,本申请的便携式智能语音交互设备还可通过本体的麦克风阵列17拾取用户语音和本体的第一扬声器118进行语音播报,从而兼具了普通Echo Dot设备的功能,另外,还可通过第二通讯模块12实现拨打电话、发送 短信功能和上网功能,功能更丰富全面,可满足人们的日常需要,在一定程度上可以取代手机的功能。
需要指出的是,由于大量的数据处理和数据分析工作是在云端由云服务器完成的,并借助移动数据通讯(3G/4G/5G)或WiFi通讯将网络数据资源发回便携式智能语音交互设备,使得便携式智能语音交互设备不需要强大计算能力的处理器、也不需要昂贵的显示屏,而只需要较好的通讯能力即可,相比手机,不仅可节省便携式智能语音交互设备的硬件成本,而且使用更加方便、智能,可使得人们离开对手机的依赖。
参考图3至图6,在一些实施例,放置于耳机槽112内的耳机20通过磁吸、扣合或卡合方式实现与本体10可拆卸连接,图5至图8中显示的为磁吸形式,耳机磁吸装置205与本体磁吸装置105互相吸附。
在一些实施例,本体外壳101内还设置有与本体处理器19电连接的位置传感器,位置传感器用于检测耳机20在耳机槽112内放置的位置是否准确,避免耳机的电能输入端25与本体的电能输出端15“虚接”,影响耳机的充电效果。
参考图11,在一些实施例,第二通讯模块12还包括:e-SIM卡模块。E-SIM卡内嵌于本体10内部,不再需要用户购买设备后自己插卡,可直接采用软件注册或者直接购买等类型的方式就可以使用自己的运营商网络和套餐,由于不再需要专门设计一个独立的SIM卡槽,使得本体10具有更轻、更薄的机身,制造成本也更低。
在一些实施例,耳机通讯模块21与第一通讯模块11可通过WiFi、蓝牙或红外等无线连接方式实现无线连接。优选地,在一种实施例,耳机通讯模块21与第一通讯模块11可蓝牙实现无线连接,即耳机通讯模块21与第一通讯模块11包括蓝牙模块,此时,耳机20可以为TWS耳机、经典立体声蓝牙耳机或经典单边蓝牙耳机。
当耳机20为TWS耳机(参考图3至图6),主耳机放置于左耳机槽112a内,从耳机放置于右耳机槽112b内。
在语音播放时,主耳机通过耳机通讯模块21来获取本体10利用第一通讯模块11所发出的语音信号(包括通话语音、短信文本语音信号和云服务器8发送回的网络数据等),获得该语音信号后,主耳机将语音信号送入主耳机扬声器播放,并用微波通信等近场通讯的方式将语音信号转发给从耳机。
在拾取用户语音信号时,主耳机或从耳机均可拾取用户语音信号,并通过第一通讯模块11发送至本体10,本体处理器19会将主耳机、从耳机分别拾取的用户语音进行去噪、比较,融合为一个语音信号。
在一些实施例,第二通讯模块12还可用于实现腕带式AI语音交互装置与现有的智能家居装置无线通讯连接,从而进一步,通过AI语音实 现对智能家居装置的控制,使得腕带式AI语音交互装置成为一个智能家居的“遥控器”。具体地,耳机20拾取用户说出的语音命令,耳机处理器29响应该语音命令,通过耳机通讯模块21将语音命令发送至本体的第一通讯模块11,本体处理器19响应该语音命令,通过第二通讯模块12发送至与智能家居装置,使得智能家居执行诸如开机、关机或调温等操作;或者,麦克风阵列17拾取用户说出的语音命令,本体处理器19响应该语音命令,通过第二通讯模块12发送至与智能家居装置,使得智能家居执行诸如开机、关机或调温等操作。
在一些实施例,本体10还设置有与本体处理器19相连接的本体触摸键104和本体LED103,本体LED103包括电量显示灯、SIM卡灯、WiFi灯和语音灯。例如:
(1)电量显示灯设置为4格,显示电量时,
1)电量大于75%并且小于等于100%,四格灯全亮;
2)电量大于50%并且小于等于75%,三格灯点亮;
3)电量大于25%并且小于等于50%,两格灯点亮;
4)电量大于10%并且小于等于25%,一格灯点亮;
5)电量小于等于10%,一格灯呼吸。
(2)SIM卡灯状态指示时,
绿灯亮表示有信号,闪烁表示搜网,不亮表示无服务;
(3)WiFi灯状态指示时,
绿灯亮表示有信号,闪烁表示搜网,不亮表示无服务;
其中,有数据传输时,使用WiFi流量则指示WiFi的绿灯呼吸,使用SIM卡流量则指示SIM卡的绿灯呼吸,优先使用WiFi。
(4)语音灯状态指示时,
唤醒后亮绿灯,搜索时绿灯闪烁,播报时绿灯呼吸。
在一些实施例,耳机20还设置有与耳机处理器29相连接的耳机触摸键204和耳机LED203。
在一些实施例,当耳机20放置于耳机槽112内充电时,耳机20处于不工作的休眠状态,不过,若麦克风阵列17损坏不能使用时,可开启备用方式。即:通过耳机触摸键204唤醒耳机20,使得耳机20的拾音器27拾取用户语音,耳机20将拾取的用户语音模数转换后的数字语音信号发送至第一通讯模块11,本体处理器19响应该数字语音信号,通过移动数据上网模块(3G/4G/5G)或WiFi收发器与云服务器8进行通讯连接,将该数字语音信号发送至云服务器8并访问云服务器8,进行数据交互。需要指出的是,在语音播报时,仍然选择第一扬声器118进行语音播报,也就是说,对于蜂窝收发器收到的通话语音信号、短信语音信息信号以及云服务器8下发的网络数据,本体处理器19不是发送 至耳机20,而是直接发送至第一扬声器118,使得第一扬声器118进行语音播报。
在一些实施例,当耳机20从耳机槽112内取出,通过长按耳机触摸键204可使得耳机20通过耳机通讯模块21发出搜寻本体10的信号,本体20通过第一通讯模块11接收到搜寻本体的信号时,将使得本体LED103全部的指示灯高频闪烁以及第一扬声器118发出提示音,从而方便用户找回丢失的本体20;通过长按本体触摸键104可使得本体10通过第一通讯模块11发出搜寻耳机的信号,耳机20通过耳机通讯模块21接收到搜寻信号时,将使得耳机LED204全部的指示灯高频闪烁以及第二扬声器28发出提示音,从而方便用户找回丢失的耳机。由此可见,本申请的腕带式AI语音交互装置具有找回功能,可避免耳机或充电座(本体10)丢失。因此,相比于现有的耳机充电座(包括现有的TWS耳机充电座),腕带式AI语音交互装置不仅携带方便,而且因为具有找回功能,使得耳机和充电座不易丢失。
综上所述,本申请的腕带式AI语音交互装置将耳机放置于本体内,可使得TWS耳机及其充电座携带方便;并由于TWS耳机可与具有拨打电话、发送信息和上网等功能的充电座(本体)相连接,使得TWS耳机不再依赖于手机就可使用;而且由于本申请的腕带式AI语音交互装置具有拨打电话、发送信息和上网等功能,并通过AI语音交互取代了手机屏幕操作的交互方式,可满足人们的日常需求,使得人们减少对手机的依赖,并可减少人们的用眼时间,保护视力。
实施例二:
请参考图1和图2,本申请还提供一种便携式智能语音交互系统,包括:如实施例一中的便携式智能语音交互控制设备以及云服务器,该便携式智能语音交互控制设备与云服务器8通讯连接。
参考图9和图10,如实施例一所述的分体使用方式,,该便携式智能语音交互系统数据交互的过程包括:
语音识别阶段:通过耳机27的拾音器拾取用户语音,拾取的用户语音经耳机处理器29进行模数转换,模数转换后得到的数字语音信号通过耳机通讯模块21发送至第一通讯模块11,第一通讯模块11接收到该数字语音信号后,本体处理器19响应该数字语音信号,通过第二通讯模块12向云服务器8发送登录访问请求和该数字语音信号,并登录相应云服务器8的接口,对该数字语音信号进行语音识别及语义分析;
数据调用阶段:分析完成后,云服务器8调用对应网络数据,比如进行百度搜索得到搜索结果,或调用网络音乐播放器得到歌曲音频资源、或使用高德地图搜索、规划路径等到导航数据等等,并将这些网络数据发送至便携式智能语音交互控制装置;
语音播报阶段:便携式智能语音交互控制装置通过第二通讯模块12接收到云服务器8下发的网络数据后,将网络数据通过第一通讯模块11发送至耳机20,使得耳机20根据该网络数据利用第二扬声器28进行相应的语音播报。
或者,参考图10,如实施例一所述的合体使用方式,该便携式智能语音交互系统数据交互的过程包括:
语音识别阶段:通过麦克风阵列17直接拾取用户说出的语音信号,并将其转换为数字语音信号,再通过移动数据第二通讯模块(上网模块、WiFi收发器)与云服务器8进行通讯连接,将该数字语音信号发送至云服务器8并访问云服务器8,并登录相应云服务器8的接口,对该数字语音信号进行语音识别及语义分析;
数据调用阶段:分析完成后,云服务器8调用对应网络数据,比如进行百度搜索得到搜索结果,或调用网络音乐播放器得到歌曲音频资源、或使用高德地图搜索、规划路径等到导航数据等等,并将这些网络数据发送至便携式智能语音交互控制装置;
语音播报阶段:便携式智能语音交互控制装置通过第二通讯模块12接收到云服务器8下发的网络数据后,使得本体处理器19根据该网络数据利用第一扬声器118进行相应的语音播报。
另外,参考图11,如实施例一所述的备用方式,该便携式智能语音交互系统数据交互的过程包括:
语音识别阶段:通过耳机27的拾音器拾取用户语音,拾取的用户语音经耳机处理器29进行模数转换,模数转换后得到的数字语音信号通过耳机通讯模块21发送至第一通讯模块11,第一通讯模块11接收到该数字语音信号后,本体处理器19响应该数字语音信号,通过第二通讯模块12向云服务器8发送登录访问请求和该数字语音信号,并登录相应云服务器8的接口,对该数字语音信号进行语音识别及语义分析;
数据调用阶段:分析完成后,云服务器8调用对应网络数据,比如进行百度搜索得到搜索结果,或调用网络音乐播放器得到歌曲音频资源、或使用高德地图搜索、规划路径等到导航数据等等,并将这些网络数据发送至便携式智能语音交互控制装置;
语音播报阶段:便携式智能语音交互控制装置通过第二通讯模块12接收到云服务器8下发的网络数据后,使得本体处理器19根据该网络数据利用第一扬声器118进行相应的语音播报。
相应地,本申请还提供一种便携式智能语音交互方法,包括:
耳机20从耳机槽112内取出时,耳机20的拾音器27拾取用户语音,拾取的用户语音通过耳机处理器29进行模数转换,模数转换后得到的数字语音信号通过耳机通讯模块21发送至第一通讯模块11;第一 通讯模块11接收到该数字语音信号后,本体处理器19响应该数字语音信号,通过第二通讯模块12向云服务器8发送登录访问请求和该数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;分析完成后,云服务器8调用对应网络数据,并将网络数据发送至第二通讯模块12,第二通讯模块12接收网络数据后,本体处理器19响应该网络数据,通过第二通讯模块12转发至耳机通讯模块21,耳机通讯模块21接收到该网络数据后,耳机处理器29响应该网络数据,通过第二扬声器28根据该网络数据进行相应的语音播报;
耳机8放置于耳机槽112内充电时,本体10的麦克风阵列17拾取用户语音,拾取的用户语音通过本体处理器19进行模数转换;模数转换后得到的数字语音信号通过第二通讯模块12向云服务器8发送登录访问请求和该数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;分析完成后,云服务器8调用对应网络数据,并将网络数据发送至第二通讯模块12,第二通讯模块12接收网络数据后,本体处理器19响应网络数据,通过第一扬声器118根据网络数据进行相应的语音播报;
开启备用方式后,耳机8放置于耳机槽112内充电,耳机20的拾音器27拾取用户语音,拾取的用户语音通过耳机处理器29进行模数转换,模数转换后得到的数字语音信号通过耳机通讯模块21发送至第一通讯模块11;第一通讯模块11接收到该数字语音信号后,本体处理器19响应该数字语音信号,通过第二通讯模块12向云服务器8发送登录访问请求和该数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;分析完成后,云服务器8调用对应网络数据,并将网络数据发送至第二通讯模块12,第二通讯模块12接收网络数据后,本体处理器19响应该网络数据,通过第一扬声器118根据网络数据进行相应的语音播报。
本领域技术人员可以理解,上述实施方式中各种方法的全部或部分功能可以通过硬件的方式实现,也可以通过计算机程序的方式实现。当上述实施方式中全部或部分功能通过计算机程序的方式实现时,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:只读存储器、随机存储器、磁盘、光盘、硬盘等,通过计算机执行该程序以实现上述功能。例如,将程序存储在设备的存储器中,当通过处理器执行存储器中程序,即可实现上述全部或部分功能。另外,当上述实施方式中全部或部分功能通过计算机程序的方式实现时,该程序也可以存储在服务器、另一计算机、磁盘、光盘、闪存盘或移动硬盘等存储介质中,通过下载或复制保存到本地设备的存储器中,或对本地设备的系统进行版本更新,当通过处理器执行存储器中的程序时,即可实现上述实施方式中全部或 部分功能。
以上应用了具体个例对本发明进行阐述,只是用于帮助理解本发明,并不用以限制本发明。对于本发明所属技术领域的技术人员,依据本发明的思想,还可以做出若干简单推演、变形或替换。

Claims (9)

  1. 一种便携式智能语音交互控制设备,其特征在于,包括本体以及与所述本体可拆卸连接的耳机,所述本体包括本体外壳和旋转盖,所述旋转盖扣于本体外壳之上,本体外壳表面嵌入式设置有用于放置耳机的耳机槽,耳机槽内设置有电能输出端,耳机上设置有电能输入端,耳机可放置于所述耳机槽内,通过所述电能输出端和所述电能输入端实现与本体电连接进行充电;
    所述旋转盖上设置有取物孔,水平旋转所述旋转盖可使得耳机从所述取物孔露出便于耳机取出;所述耳机从耳机槽取出后,通过无线通讯与所述本体通讯连接;
    所述本体外壳包括:本体上壳、透声壳体和本体下壳,透声壳体设置于本体上壳与本体下壳之间,所述透声壳体上设置有网状透声孔;所述透声壳体与本体上壳连接处还设置有跑马灯圈,所述跑马灯圈外侧设置有透光装饰带。
  2. 如权利要求1所述的便携式智能语音交互控制设备,其特征在于,所述本体外壳内设置有麦克风阵列、第一通讯模块、第二通讯模块、第一扬声器和本体处理器,所述本体处理器分别与麦克风阵列、第一通讯模块、第二通讯模块、第一扬声器电连接,所述第二通讯模块包括移动数据上网模块、蜂窝收发器和WiFi收发器;所述耳机为入耳式,所述耳机包括设置于端部的入耳软胶和耳机外壳,耳机外壳内设置有耳机通讯模块、耳机处理器、第二扬声器和拾音器,耳机处理器分别与耳机通讯模块、第二扬声器和拾音器电连接,所述耳机通讯模块与第一通讯模块无线连接。
  3. 如权利要求2所述的便携式智能语音交互控制设备,其特征在于,所述耳机通讯模块与第一通讯模块可通过WiFi、蓝牙或红外无线连接方式实现无线连接;所述耳机包括:TWS耳机、经典立体声蓝牙耳机或经典单边蓝牙耳机。
  4. 如权利要求1所述的便携式智能语音交互控制设备,其特征在于,所述本体外壳内还设置有与所述本体处理器电连接的本体储能电路,所述本体储能电路还连接有充电模块、电能输出电路,所述电能输出电路还连接有充电模块、电能输出端,所述充电模块包括无线充电模块或USB接口充电模块;所述耳机外壳内还设置有与耳机处理器相连接 的耳机储能器电路,所述耳机储能电路还连接有电能输入端。
  5. 如权利要求1所述的便携式智能语音交互控制设备,其特征在于,所述本体还设置有与所述本体处理器相连接的本体触摸键和本体LED,所述耳机上还设置有与所述耳机处理器相连接的耳机触摸键和耳机LED。
  6. 如权利要求1所述的装置,其特征在于,所述第二通讯模块还包括:e-SIM卡模块。
  7. 一种便携式智能语音交互系统,其特征在于,包括:如权利要求2-6任一项所述的便携式智能语音交互控制设备以及云服务器,所述便携式智能语音交互控制设备与所述云服务器通讯连接。
  8. 一种便携式智能语音交互方法,其特征在于,包括:
    耳机从耳机槽内取出时,耳机的拾音器拾取用户语音,拾取的用户语音通过耳机处理器进行模数转换,模数转换后得到的数字语音信号通过耳机通讯模块发送至第一通讯模块;
    第一通讯模块接收到所述数字语音信号后,本体处理器响应所述数字语音信号,通过第二通讯模块向云服务器发送所述数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;
    分析完成后,云服务器调用对应网络数据,并将网络数据发送至第二通讯模块,第二通讯模块接收网络数据后,本体处理器响应网络数据,通过第二通讯模块转发至耳机通讯模块,耳机通讯模块接收到网络数据后,耳机处理器响应网络数据,通过第二扬声器根据网络数据进行相应的语音播报;或者
    耳机放置于耳机槽内充电时,本体的麦克风阵列拾取用户语音,拾取的用户语音通过本体处理器进行模数转换;
    模数转换后得到的数字语音信号通过第二通讯模块向云服务器发送所述数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;
    分析完成后,云服务器调用对应网络数据,并将网络数据发送至第二通讯模块,第二通讯模块接收网络数据后,本体处理器响应网络数据,通过第一扬声器根据网络数据进行相应的语音播报;或者
    耳机放置于耳机槽内充电时,耳机的拾音器拾取用户语音,拾取的用户语音通过耳机处理器进行模数转换,模数转换后得到的数字语音信 号通过耳机通讯模块发送至第一通讯模块;
    第一通讯模块接收到所述数字语音信号后,本体处理器响应所述数字语音信号,通过第二通讯模块向云服务器发送所述数字语音信号,并登录相应云服务器的接口,对数字语音信号进行语音识别及语义分析;
    分析完成后,云服务器调用对应网络数据,并将网络数据发送至第二通讯模块,第二通讯模块接收网络数据后,本体处理器响应网络数据,通过第一扬声器根据网络数据进行相应的语音播报。
  9. 一种计算机可读存储介质,其特征在于,包括程序,所述程序能够被处理器执行以实现如权利要求8所述的方法。
PCT/CN2018/087576 2018-05-18 2018-05-18 一种便携式智能语音交互控制设备、方法及系统 WO2019218369A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP18919359.2A EP3621068A4 (en) 2018-05-18 2018-05-18 PORTABLE INTELLIGENT VOICE INTERACTION CONTROL DEVICE, METHOD AND SYSTEM
PCT/CN2018/087576 WO2019218369A1 (zh) 2018-05-18 2018-05-18 一种便携式智能语音交互控制设备、方法及系统
CN201820960500.3U CN208507180U (zh) 2018-05-18 2018-06-21 一种便携式智能语音交互控制设备
CN201810643457.2A CN108550367A (zh) 2018-05-18 2018-06-21 一种便携式智能语音交互控制设备、方法及系统
US16/708,639 US10809964B2 (en) 2018-05-18 2019-12-10 Portable intelligent voice interactive control device, method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/087576 WO2019218369A1 (zh) 2018-05-18 2018-05-18 Portable intelligent voice interactive control device, method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/708,639 Continuation US10809964B2 (en) 2018-05-18 2019-12-10 Portable intelligent voice interactive control device, method and system

Publications (1)

Publication Number Publication Date
WO2019218369A1 true WO2019218369A1 (zh) 2019-11-21

Family

ID=63492862

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/087576 WO2019218369A1 (zh) 2018-05-18 2018-05-18 Portable intelligent voice interactive control device, method and system

Country Status (4)

Country Link
US (1) US10809964B2 (zh)
EP (1) EP3621068A4 (zh)
CN (2) CN108550367A (zh)
WO (1) WO2019218369A1 (zh)


Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019218370A1 (zh) * 2018-05-18 2019-11-21 深圳傲智天下信息科技有限公司 一种ai语音交互方法、装置及系统
WO2019218369A1 (zh) 2018-05-18 2019-11-21 深圳傲智天下信息科技有限公司 一种便携式智能语音交互控制设备、方法及系统
CN109065050A (zh) * 2018-09-28 2018-12-21 上海与德科技有限公司 一种语音控制方法、装置、设备及存储介质
CN108900945A (zh) * 2018-09-29 2018-11-27 上海与德科技有限公司 蓝牙耳机盒和语音识别方法、服务器和存储介质
CN111128142A (zh) * 2018-10-31 2020-05-08 深圳市冠旭电子股份有限公司 一种智能音箱拨打电话的方法、装置及智能音箱
CN111276135B (zh) * 2018-12-03 2023-06-20 华为终端有限公司 网络语音识别方法、网络业务交互方法及智能耳机
CN110267139B (zh) * 2019-06-12 2024-04-12 郭军伟 一种智能人声识别噪音过滤耳机
USD916291S1 (en) * 2019-06-17 2021-04-13 Oxiwear, Inc. Earpiece
CN112114772A (zh) * 2019-06-20 2020-12-22 傲基科技股份有限公司 语音交互装置及其控制方法、设备和计算机存储介质
USD929322S1 (en) * 2019-08-09 2021-08-31 Shenzhen Grandsun Electronic Co., Ltd. Charging case for pair of earbuds
USD921582S1 (en) * 2019-10-25 2021-06-08 Shenzhen Eriwin Technology Limited. Charging box for wireless earphones
USD920236S1 (en) * 2020-01-08 2021-05-25 Xiaoqian Xie Headphone charging case
CN111246330A (zh) * 2020-01-09 2020-06-05 美特科技(苏州)有限公司 一种蓝牙耳机及其通信方法
USD941806S1 (en) * 2020-03-02 2022-01-25 Chunhong Liu Earphone
WO2021210923A1 (ko) * 2020-04-14 2021-10-21 삼성전자 주식회사 블루투스 통신을 위한 무선 통신 회로를 포함하는 전자 장치 및 그의 동작 방법
USD964927S1 (en) * 2020-05-13 2022-09-27 XueQing Deng Charging box for headset
USD1002582S1 (en) 2020-05-29 2023-10-24 Oxiwear, Inc. Earpiece and charger case
CN111741394A (zh) * 2020-06-05 2020-10-02 北京搜狗科技发展有限公司 一种数据处理方法、装置及可读介质
CN111739530A (zh) * 2020-06-05 2020-10-02 北京搜狗科技发展有限公司 一种交互方法、装置、耳机和耳机收纳装置
CN212936146U (zh) * 2020-07-07 2021-04-09 瑞声科技(新加坡)有限公司 耳机充电装置及蓝牙耳机
CN111968553A (zh) * 2020-07-27 2020-11-20 西安理工大学 一种基于互联网的大型旅游景区安全导游服务系统
CN112165143A (zh) * 2020-10-29 2021-01-01 歌尔科技有限公司 无线耳机充电方法、装置、设备及存储介质
CN112511944B (zh) * 2020-12-03 2022-09-20 歌尔科技有限公司 多功能耳机充电盒
CN112511945B (zh) * 2020-12-03 2022-11-22 歌尔科技有限公司 具有智能语音功能的无线耳机盒
USD983744S1 (en) * 2020-12-08 2023-04-18 Lg Electronics Inc. Combined cradle with charging pad for wireless earphones
CN112820286A (zh) * 2020-12-29 2021-05-18 北京搜狗科技发展有限公司 一种交互方法和耳机设备
EP4335013A1 (en) * 2021-05-08 2024-03-13 Harman International Industries, Incorporated Charging device for wearable device and wearable device assembly
CN113473312A (zh) * 2021-07-13 2021-10-01 深圳市深科信飞电子有限公司 音响控制系统及音响
TWI801941B (zh) * 2021-07-21 2023-05-11 國立中正大學 個人化語音轉換系統
CN113676808B (zh) * 2021-08-12 2023-09-12 广州番禺巨大汽车音响设备有限公司 基于蓝牙耳机与耳机盒交互的控制方法及控制装置
CN114257916A (zh) * 2022-01-13 2022-03-29 深圳市同力创科技有限公司 可提高收音里程的便携耳机
USD1028877S1 (en) 2022-03-04 2024-05-28 Google Llc Charging tray
CN115273431B (zh) * 2022-09-26 2023-03-07 荣耀终端有限公司 设备的寻回方法、装置、存储介质和电子设备

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616349A (zh) * 2008-06-27 2009-12-30 深圳富泰宏精密工业有限公司 蓝牙耳机及具该蓝牙耳机的便携式电子装置
US7877115B2 (en) * 2005-01-24 2011-01-25 Broadcom Corporation Battery management in a modular earpiece microphone combination
CN202524445U (zh) * 2012-04-28 2012-11-07 叶晓林 带有蓝牙耳机的手机
CN106488353A (zh) * 2016-08-26 2017-03-08 珠海格力电器股份有限公司 一种终端设备
CN206181335U (zh) * 2016-09-29 2017-05-17 深圳市战音科技有限公司 智能音箱
CN106878850A (zh) * 2017-03-13 2017-06-20 歌尔股份有限公司 利用无线耳机实现语音交互的方法、系统及无线耳机
CN106952647A (zh) * 2017-03-14 2017-07-14 上海斐讯数据通信技术有限公司 一种基于云管理的智能音箱及其使用方法
CN107333200A (zh) * 2017-07-24 2017-11-07 歌尔科技有限公司 一种翻译耳机收纳盒、无线翻译耳机和无线翻译系统
CN206639587U (zh) * 2017-03-03 2017-11-14 北京金锐德路科技有限公司 可穿戴的语音交互智能设备
CN207100728U (zh) * 2017-06-23 2018-03-16 深圳市阜昌技术有限公司 一种可搭载真无线蓝牙耳机的智能运动手环
CN108550367A (zh) * 2018-05-18 2018-09-18 深圳傲智天下信息科技有限公司 一种便携式智能语音交互控制设备、方法及系统

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7680267B2 (en) * 2005-07-01 2010-03-16 Plantronics, Inc. Headset with a retractable speaker portion
JP3119248U (ja) * 2005-12-06 2006-02-16 ▲シウ▼瑩企業有限公司 無線イヤホンデバイスおよび充電ベースのアセンブリー
US20110286615A1 (en) * 2010-05-18 2011-11-24 Robert Olodort Wireless stereo headsets and methods
CN202931433U (zh) * 2012-11-02 2013-05-08 艾尔肯·买合木提江 一种蓝牙耳机和蓝牙免提语音交互设备
US9748998B2 (en) * 2014-12-16 2017-08-29 Otter Products, Llc Electronic device case with peripheral storage
US10219062B2 (en) * 2015-06-05 2019-02-26 Apple Inc. Wireless audio output devices
EP3151582B1 (en) * 2015-09-30 2020-08-12 Apple Inc. Earbud case with charging system
US10085083B2 (en) * 2016-09-23 2018-09-25 Apple Inc. Wireless headset carrying case with digital audio output port
CN206061101U (zh) * 2016-09-29 2017-03-29 深圳市晟邦设计咨询有限公司 一种智能语音音响
CN106454587B (zh) * 2016-09-30 2023-04-21 歌尔科技有限公司 一种无线耳机的收纳盒
CN206533526U (zh) * 2016-12-02 2017-09-29 歌尔科技有限公司 一种手环
CN106878849A (zh) * 2017-01-22 2017-06-20 歌尔股份有限公司 无线耳机装置以及人工智能装置
CN107241689B (zh) * 2017-06-21 2020-05-05 深圳市冠旭电子股份有限公司 一种耳机语音交互方法及其装置、终端设备
CN107277272A (zh) * 2017-07-25 2017-10-20 深圳市芯中芯科技有限公司 一种基于软件app的蓝牙设备语音交互方法及系统


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3621068A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220201383A1 (en) * 2019-09-11 2022-06-23 Goertek Inc. Wireless earphone noise reduction method and device, wireless earphone, and storage medium
US11812208B2 (en) * 2019-09-11 2023-11-07 Goertek Inc. Wireless earphone noise reduction method and device, wireless earphone, and storage medium
CN111885183A (zh) * 2020-07-28 2020-11-03 合肥华凌股份有限公司 智能设备、家电设备、控制方法和计算机介质
CN112350408A (zh) * 2020-11-12 2021-02-09 万魔声学(湖南)科技有限公司 一种无线tws耳机设备用充电仓

Also Published As

Publication number Publication date
CN208507180U (zh) 2019-02-15
CN108550367A (zh) 2018-09-18
EP3621068A4 (en) 2021-07-21
EP3621068A1 (en) 2020-03-11
US20200110569A1 (en) 2020-04-09
US10809964B2 (en) 2020-10-20

Similar Documents

Publication Publication Date Title
WO2019218369A1 (zh) Portable intelligent voice interactive control device, method and system
WO2019218368A1 (zh) TWS earphone, and wristband-type AI voice interaction apparatus and system
US11158318B2 (en) AI voice interaction method, device and system
WO2020010579A1 (zh) Smart watch with earphone having voice interaction function
CN113169760B (zh) Wireless short-range audio sharing method and electronic device
US11490061B2 (en) Proximity-based control of media devices for media presentations
US20150171973A1 (en) Proximity-based and acoustic control of media devices for media presentations
US20150172878A1 (en) Acoustic environments and awareness user interfaces for media devices
CN109348334B (zh) Wireless earphone and environment monitoring method and apparatus thereof
CN113489830A (zh) Information processing device
CN111447600A (zh) Audio sharing method for wireless earphones, terminal device, and storage medium
WO2020019843A1 (zh) Microphone hole blockage detection method and related product
CN113411726A (zh) Audio processing method, apparatus, and system
CN104796816A (zh) Automatically switched WiFi smart speaker
CN106792321B (zh) Split-type wireless earphone and communication method thereof
US20180270557A1 (en) Wireless hearing-aid circumaural headphone
WO2020042491A9 (zh) Earphone far-field interaction method, earphone far-field interaction accessory, and wireless earphone
CN112866855A (zh) Hearing assistance method and system, earphone charging case, and storage medium
CN109120297B (zh) Earphone far-field interaction method, earphone far-field interaction accessory, and wireless earphone
CN104811850A (zh) WiFi-based smart speaker
CN104467902A (zh) Portable audio transmission apparatus and audio transmission method thereof
CN109120296A (zh) Earphone far-field interaction method, earphone far-field interaction accessory, and wireless earphone
CN218416600U (zh) Electronic device storage case and ear-worn device system
CN107172522A (zh) Multi-functional Bluetooth earphone
CN117917899A (zh) Audio service processing method, electronic device, and computer storage medium

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018919359

Country of ref document: EP

Effective date: 20191204

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919359

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE