CN112532266A - Intelligent helmet and voice interaction control method of intelligent helmet

Intelligent helmet and voice interaction control method of intelligent helmet

Info

Publication number
CN112532266A
CN112532266A
Authority
CN
China
Prior art keywords
audio data
user
module
wake
helmet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011431294.5A
Other languages
Chinese (zh)
Inventor
唐世长
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AI Speech Ltd
Original Assignee
AI Speech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AI Speech Ltd filed Critical AI Speech Ltd
Priority to CN202011431294.5A
Publication of CN112532266A
Legal status: Pending (Current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B1/00 Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38 Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827 Portable transceivers
    • H04B1/385 Transceivers carried on the body, e.g. in helmets
    • A HUMAN NECESSITIES
    • A42 HEADWEAR
    • A42B HATS; HEAD COVERINGS
    • A42B3/00 Helmets; Helmet covers; Other protective head coverings
    • A42B3/04 Parts, details or accessories of helmets
    • A42B3/30 Mounting radio sets or communication systems
    • A42B3/303 Communication between riders or passengers
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B1/00 Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38 Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827 Portable transceivers
    • H04B1/385 Transceivers carried on the body, e.g. in helmets
    • H04B2001/3872 Transceivers carried on the body, e.g. in helmets with extendable microphones or earphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/13 Hearing devices using bone conduction transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/11 Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's

Abstract

The invention discloses an intelligent helmet and a voice interaction control method thereof. The intelligent helmet comprises: an audio acquisition module configured to acquire user wake-up audio data; a wake-up recognition module configured to detect whether the collected user wake-up audio data meets a preset wake-up condition; a control instruction detection module configured to detect user control audio data and parse a control instruction corresponding to the user control audio data when the user wake-up audio data meets the wake-up condition; a terminal control module configured to send the control instruction to a mobile terminal so that the mobile terminal executes an operation corresponding to the control instruction; and a bone conduction speaker module configured to play feedback audio data received from the mobile terminal. In this way, noise reduction and lossless transmission of the audio data can be ensured, the user's hands are kept free, and the helmet can be better applied to personalized scenarios.

Description

Intelligent helmet and voice interaction control method of intelligent helmet
Technical Field
The invention belongs to the technical field of wearable equipment, and particularly relates to an intelligent helmet and a voice interaction control method of the intelligent helmet.
Background
With the rapid development of Internet technology, people's living habits have changed. Online business models such as e-commerce have driven rapid growth of the express delivery industry, and a take-out rider delivering goods must wear a helmet while remaining able to answer calls at any time. The traditional combination of a helmet and a mobile phone can hardly meet this practical requirement: answering a call is inconvenient while the helmet is worn, and if the phone is used in the open, the call with the customer is disturbed by the surrounding noise in a noisy environment.
For the above problems, no satisfactory solution has yet been provided in the industry.
Disclosure of Invention
The embodiments of the invention provide an intelligent helmet and a voice interaction control method of the intelligent helmet, which are intended to solve at least one of the above technical problems.
In a first aspect, an embodiment of the invention provides an intelligent helmet, comprising: an audio acquisition module configured to acquire user wake-up audio data; a wake-up recognition module configured to detect whether the collected user wake-up audio data meets a preset wake-up condition; a control instruction detection module configured to detect user control audio data and parse a control instruction corresponding to the user control audio data when the user wake-up audio data meets the wake-up condition; a terminal control module configured to send the control instruction to a mobile terminal so that the mobile terminal executes an operation corresponding to the control instruction; and a bone conduction speaker module configured to play feedback audio data received from the mobile terminal.
In a second aspect, an embodiment of the invention provides a voice interaction control method of an intelligent helmet, comprising: acquiring user audio data, wherein the user audio data is collected by an audio acquisition module in the intelligent helmet; detecting whether the collected user audio data meets a preset wake-up condition; when the user audio data meets the wake-up condition, detecting user control audio data and parsing a control instruction corresponding to the user control audio data; sending the control instruction to a mobile terminal so that the mobile terminal executes an operation corresponding to the control instruction; and playing feedback audio data received from the mobile terminal through a bone conduction speaker module in the intelligent helmet.
In a third aspect, an embodiment of the invention provides an electronic device, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions enabling the at least one processor to perform the steps of the above method.
In a fourth aspect, an embodiment of the present invention provides a storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the above method.
The beneficial effects of the embodiments of the invention are as follows:
The intelligent helmet collects user wake-up audio data through the audio acquisition module; when the user wake-up audio data meets a preset wake-up condition, it detects user control audio data and parses the corresponding control instruction, performs the corresponding control operation on the mobile terminal, and plays the feedback audio data from the mobile terminal by bone conduction. The intelligent helmet thus has a voice interaction function: after putting on the helmet, the user can wake up its interaction function by voice and control the connected mobile terminal by voice. The bone conduction technology ensures noise reduction and lossless transmission of the audio data, the user's hands are kept free, and the helmet is applicable to personalized voice interaction scenarios (for example, a take-out rider making or answering a call).
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the invention, and that those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a block diagram of an example of a smart helmet according to an embodiment of the invention;
fig. 2 shows a block diagram of an example of a smart helmet according to an embodiment of the invention;
FIG. 3 shows a noise reduction simulation diagram of an example of a smart helmet in accordance with an embodiment of the invention;
fig. 4 shows a flowchart of an example of a voice interaction control method of a smart helmet according to an embodiment of the present invention;
fig. 5 is a block diagram illustrating an example of a voice interaction control apparatus of an intelligent helmet according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
As used herein, "module", "system" and the like refer to a computer-related entity: hardware, a combination of hardware and software, or software in execution. For example, an element may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program and/or a computer. A server, or an application or script running on a server, may also be an element. One or more elements may reside within a process and/or thread of execution; an element may be localized on one computer and/or distributed between two or more computers, and may be operated through various computer-readable media. Elements may also communicate by way of local and/or remote processes based on a signal having one or more data packets, for example a signal from data interacting with another element in a local system or a distributed system, and/or with other systems across a network such as the Internet.
Finally, it should also be noted that the terms "comprises" and "comprising", when used herein, mean that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
Fig. 1 shows a block diagram of an example of an intelligent helmet according to an embodiment of the present invention.
As shown in fig. 1, the smart helmet 100 includes an audio acquisition module 110, a wake-up recognition module 120, a control instruction detection module 130, a terminal control module 140, and a bone conduction speaker module 150.
Specifically, user wake-up audio data may be collected by the audio acquisition module 110, which may be, for example, a microphone. The wake-up recognition module 120 detects whether the collected user wake-up audio data meets a preset wake-up condition; for example, the wake-up condition may be deemed satisfied when the user audio data contains a wake-up keyword, or when the sound intensity of the user's voice exceeds a set threshold.
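As an illustration only (not part of the patent disclosure), a minimal sketch of such a wake-up check could combine a keyword match with an energy threshold; the wake phrase and threshold value below are assumptions:

```python
import numpy as np

# Hypothetical values -- the patent does not name a wake phrase or a threshold.
WAKE_KEYWORD = "hello helmet"
INTENSITY_THRESHOLD = 0.02  # assumed RMS threshold on normalized samples

def meets_wake_condition(transcript: str, samples: np.ndarray) -> bool:
    """Return True when the recognized sentence contains the wake keyword
    or the sound intensity of the captured audio exceeds the set threshold."""
    keyword_hit = WAKE_KEYWORD in transcript.lower()
    rms = float(np.sqrt(np.mean(np.square(samples))))
    return keyword_hit or rms > INTENSITY_THRESHOLD
```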
The control instruction detection module 130 is configured to detect user control audio data and parse a control instruction corresponding to the user control audio data when the user wake-up audio data meets the wake-up condition. For example, the audio acquisition module may be kept active to continue collecting user audio data as the corresponding user control audio data, and the corresponding control instruction may be determined through speech recognition.
The terminal control module 140 is configured to send the control instruction to the mobile terminal so that the mobile terminal performs the corresponding operation. Specifically, the control instruction may be any common control command, such as dialing a call by contact name or by phone number, answering a call, rejecting a call, playing, pausing or resuming music, querying the weather, and so on; an application (APP) on the mobile terminal may also be used to support richer interactive content.
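For illustration, a hypothetical keyword-spotting parser that maps a recognized utterance to one of the control instructions listed above might be sketched as follows; the command vocabulary and message format are assumptions, not part of the patent:

```python
from typing import Callable, Dict

# Assumed command vocabulary, mirroring the examples in the description.
COMMAND_HANDLERS: Dict[str, Callable[[str], dict]] = {
    "call":    lambda arg: {"op": "dial", "target": arg},   # dial by name or number
    "answer":  lambda arg: {"op": "answer"},
    "reject":  lambda arg: {"op": "reject"},
    "play":    lambda arg: {"op": "play", "track": arg},
    "pause":   lambda arg: {"op": "pause"},
    "resume":  lambda arg: {"op": "resume"},
    "weather": lambda arg: {"op": "weather", "city": arg},
}

def parse_instruction(transcript: str) -> dict:
    """Very small parser: the first recognized command word wins, and the
    words after it are passed along as the argument (name, number, track...)."""
    words = transcript.lower().split()
    for index, word in enumerate(words):
        if word in COMMAND_HANDLERS:
            return COMMAND_HANDLERS[word](" ".join(words[index + 1:]))
    return {"op": "unknown"}
```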
The bone conduction speaker module 150 is configured to play feedback audio data received from the mobile terminal. Specifically, in a take-out call scenario, a delivery rider can answer the customer's call through the bone conduction speaker module 150, and the bone conduction audio transmission supports lossless, clear call quality.
In the embodiment of the invention, the intelligent helmet is given an intelligent voice control function for the mobile terminal, so that it can be better applied to personalized application scenarios. For example, while a rider is driving, the surroundings are noisy and the helmet makes it difficult to talk to a customer on a mobile phone; the intelligent helmet according to the embodiment of the invention solves this problem well.
It should be noted that various types of audio acquisition modules may be arranged in the intelligent helmet, such as an ordinary silicon microphone or a bone conduction microphone. In addition, the bone conduction speaker module can be placed at a set position inside the helmet close to the wearer's skull, to ensure the quality of the audio received by the helmet wearer.
In some examples of the embodiment of the invention, the wake-up recognition module 120 may be configured to recognize the spoken sentence corresponding to the collected user wake-up audio data, detect whether the spoken sentence contains a preset wake-up keyword, and determine that the collected user wake-up audio data satisfies the wake-up condition when the spoken sentence contains the wake-up keyword.
In some embodiments, the wake-up recognition module is configured to recognize the spoken sentence corresponding to the collected user audio data based on a preset deep neural network.
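Purely as an illustrative sketch (the patent does not disclose the network architecture), a toy deep-neural-network wake-word classifier over MFCC features might look as follows; the layer sizes and feature dimensions are assumptions:

```python
import torch
from torch import nn

class WakeWordNet(nn.Module):
    """Toy classifier: wake keyword vs. everything else, over MFCC frames."""
    def __init__(self, n_mfcc: int = 40, n_frames: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_mfcc * n_frames, 128),
            nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, mfcc: torch.Tensor) -> torch.Tensor:
        # mfcc: (batch, n_mfcc, n_frames) -> logits: (batch, 2)
        return self.net(mfcc)

# Probability that each utterance in a batch contains the wake keyword:
# probs = torch.softmax(WakeWordNet()(mfcc_batch), dim=-1)[:, 1]
```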
In some examples of the embodiment of the invention, the audio acquisition module comprises a microphone array arranged to enhance user audio data corresponding to a set sound-source angle range and suppress user audio data outside that range. Specifically, the set sound-source angle range may be angle information associated with the relative position of the wearer's mouth; the microphones in the array have known relative positions, and the angle of a sound source can be calculated by detecting the difference in the time at which its sound reaches the different microphones. In the embodiment of the invention, performing noise reduction according to the sound-source angle improves the quality of the user audio data.
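To illustrate the angle calculation just described (a sketch under an assumed two-microphone geometry, not the patent's algorithm), the inter-microphone delay can be estimated by cross-correlation and converted to a source angle:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
MIC_SPACING = 0.08      # assumed distance between the two microphones, in metres

def estimate_delay(sig_a: np.ndarray, sig_b: np.ndarray, sample_rate: int) -> float:
    """Arrival-time difference (seconds) estimated from the cross-correlation peak."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / sample_rate

def source_angle(delay_seconds: float) -> float:
    """Sound-source angle in degrees (0 = directly in front of the microphone pair)."""
    path_diff = np.clip(delay_seconds * SPEED_OF_SOUND, -MIC_SPACING, MIC_SPACING)
    return float(np.degrees(np.arcsin(path_diff / MIC_SPACING)))

def within_mouth_range(angle_deg: float, lo: float = -30.0, hi: float = 30.0) -> bool:
    """Enhance audio whose angle falls in the set range; suppress the rest."""
    return lo <= angle_deg <= hi
```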
In particular, the microphone array includes a first microphone and a second microphone. Here, the first microphone and the second microphone are disposed on the same side of the smart helmet proximate to the person's face, for example, the first microphone and the second microphone may both be disposed on the left side or the right side near the person's face. In addition, the sound pickup directions of the first microphone and the second microphone are towards the position of the mouth of a person, so that the sound of a user can be fully and completely picked up.
It should be noted that picking up audio data in this way already achieves a good noise reduction effect. In addition, in order to guarantee high-quality audio data, modules for further noise reduction can be arranged in the intelligent helmet.
In some examples of the embodiment of the invention, the intelligent helmet may include one or more of the following noise reduction modules: an echo cancellation module, a background noise suppression module, and an audio dynamic amplification module. The echo cancellation module is configured to cancel the echo audio component in the collected user audio data, the background noise suppression module is configured to suppress the background noise component in the collected user audio data, and the audio dynamic amplification module is configured to dynamically amplify the collected user audio data.
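As a rough, illustrative sketch of two of these stages (the patent does not specify the algorithms), an NLMS adaptive filter is one common way to cancel the loudspeaker echo, and a simple gain control can stand in for dynamic amplification:

```python
import numpy as np

def nlms_echo_cancel(mic: np.ndarray, reference: np.ndarray,
                     taps: int = 128, mu: float = 0.5, eps: float = 1e-6) -> np.ndarray:
    """Subtract an adaptively estimated echo of the loudspeaker reference
    from the microphone signal; the residual is the (cleaner) user speech."""
    w = np.zeros(taps)
    out = np.zeros_like(mic, dtype=float)
    for n in range(taps, len(mic)):
        x = reference[n - taps:n][::-1]                    # most recent reference samples
        e = mic[n] - float(np.dot(w, x))                   # error = speech + residual echo
        w += (mu / (float(np.dot(x, x)) + eps)) * e * x    # NLMS weight update
        out[n] = e
    return out

def dynamic_amplify(frame: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Scale the frame towards a target loudness (very simple gain control)."""
    rms = float(np.sqrt(np.mean(np.square(frame)))) + 1e-12
    return frame * (target_rms / rms)
```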
As a further optimization of the embodiment of the invention, a talk mode detection module may also be arranged in the intelligent helmet, configured to detect whether the helmet is in talk mode and, when it is, to trigger the noise reduction module to perform the corresponding noise reduction operation. In this way the noise reduction operation is triggered only for an actual call scenario, which optimizes call quality; since noise reduction does not have to be applied to speech in every scenario, processing resources of the intelligent helmet are also saved.
According to the embodiment of the invention, the intelligent helmet is equipped with a sound transmission device with intelligent noise reduction, supports wake-up, recognition, voiceprint and voice interaction functions, and frees both hands of the user.
Fig. 2 shows a block diagram of an example of a smart helmet according to an embodiment of the present invention.
As shown in fig. 2, the smart helmet 200 is provided with a power module 210, a sound transmission device 220, and a CPU main control device 230.
Specifically, the power module 210 may be located inside the top of the helmet to power the whole device and provide a charging interface. The sound transmission device 220 receives the interactive voice; it may use two microphones on one side (left or right) of the helmet to form a microphone array, may be placed on the inner side of the helmet against the skull, and improves call quality through a built-in noise reduction algorithm. The CPU main control device 230 processes and transfers data between the different modules in the intelligent helmet and supports a communication connection (e.g., a Bluetooth connection) with the mobile terminal.
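As an illustrative sketch only, if the Bluetooth link to the mobile terminal were exposed as a serial (SPP) port, the CPU main control device could forward a parsed instruction as a JSON line; the port name and message format are assumptions, not part of the patent:

```python
import json
import serial  # pyserial

def send_instruction(port: str, instruction: dict, baudrate: int = 115200) -> None:
    """Forward one control instruction to the paired mobile terminal
    as a single JSON line over the (assumed) Bluetooth serial link."""
    with serial.Serial(port, baudrate, timeout=1.0) as link:
        link.write((json.dumps(instruction) + "\n").encode("utf-8"))

# Example usage (the Linux RFCOMM device name is an assumption):
# send_instruction("/dev/rfcomm0", {"op": "answer"})
```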
Fig. 3 shows a noise reduction simulation diagram of an example of a smart helmet.
Fig. 3 shows four audio channels: the first two (from top to bottom) are the speech signals received by the two microphones, which contain echo and background noise; the third is the reference sound produced by the loudspeaker; and the fourth is the audio processed by the noise reduction module. As can be seen from fig. 3, the processed audio suppresses essentially all background sound and echo.
The embodiment of the invention makes full use of the low computational cost of traditional speech processing methods and of the adaptability of a deep neural network, which can cope with a variety of environmental noises.
Thus, the voice interaction system added to the intelligent helmet fully frees both hands and reduces the wearer's driving safety risks, while the noise reduction module used during calls automatically removes ambient noise and improves speech quality.
Fig. 4 is a flowchart illustrating an example of a voice interaction control method of a smart helmet according to an embodiment of the present invention. The method may be executed by a processing device arranged in the smart helmet, for example a CPU.
As shown in fig. 4, in step 410, user audio data is acquired. Here, the user audio data is captured based on an audio capture module in the smart helmet.
In step 420, it is detected whether the collected user audio data meets a preset wake-up condition.
In step 430, when the user audio data meets the wake-up condition, the user manipulation audio data is detected and a manipulation instruction corresponding to the user manipulation audio data is analyzed.
In step 440, a manipulation instruction is sent to the mobile terminal, so that the mobile terminal performs an operation corresponding to the manipulation instruction.
In step 450, the feedback audio data received from the mobile terminal is played based on the bone conduction speaker module in the smart helmet.
Regarding the implementation of step 420: specifically, the spoken sentence corresponding to the user audio data may be recognized, and whether the spoken sentence contains a preset wake-up keyword may be detected, so as to determine accordingly whether the user audio data meets the wake-up condition.
Illustratively, the smart helmet may be configured with an interactive mode and a talk mode. In the interactive mode, the smart helmet is woken up and interacts with the wearer by voice in order to understand the wearer's intention and execute the corresponding command. Specifically, the smart helmet receives voice input, performs speech recognition, and determines whether the utterance is a wake-up word; if it is, the helmet enters the interactive state. In the interactive state, user speech continues to be received, the command word corresponding to the received speech is recognized, and the corresponding command is executed. If no command word is recognized in the user's speech, the smart helmet can be put into a sleep state to save battery power.
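A minimal sketch of this interaction loop (wake word, command recognition, fall back to sleep) might look like the following; the wake phrase and command set are assumptions for illustration only:

```python
from enum import Enum, auto
from typing import Optional, Tuple

WAKE_WORD = "hello helmet"                                   # assumed wake phrase
KNOWN_COMMANDS = {"call", "answer", "reject", "play", "pause", "weather"}

class HelmetState(Enum):
    SLEEP = auto()
    INTERACTIVE = auto()

def step(state: HelmetState, transcript: str) -> Tuple[HelmetState, Optional[str]]:
    """One turn of the loop: the wake word enters the interactive state,
    a recognized command word is returned for execution, and an
    unrecognized utterance drops back to sleep to save battery."""
    text = transcript.lower()
    if state is HelmetState.SLEEP:
        return (HelmetState.INTERACTIVE, None) if WAKE_WORD in text else (HelmetState.SLEEP, None)
    command = next((w for w in text.split() if w in KNOWN_COMMANDS), None)
    if command is None:
        return HelmetState.SLEEP, None
    return HelmetState.INTERACTIVE, command
```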
In the talk mode of the smart helmet, call quality is improved when the wearer receives or initiates a voice call, including but not limited to telephone calls and WeChat voice calls. Specifically, in the talk mode the noise reduction module may be enabled to eliminate echo, suppress background noise, and dynamically amplify the processed audio.
Fig. 5 is a block diagram illustrating an example of a voice interaction control apparatus of an intelligent helmet according to an embodiment of the present invention.
As shown in fig. 5, the voice interaction control apparatus 500 of the smart helmet includes an audio obtaining unit 510, a wake-up condition detecting unit 520, a control instruction parsing unit 530, a control instruction sending unit 540, and an audio playing control unit 550.
An audio acquisition unit 510 configured to acquire user audio data, which is acquired based on an audio acquisition module in the smart helmet.
A wake-up condition detecting unit 520 configured to detect whether the collected user audio data satisfies a preset wake-up condition.
The control instruction parsing unit 530 is configured to detect user control audio data and parse a control instruction corresponding to the user control audio data when the user audio data meets the wake-up condition.
A control instruction sending unit 540 configured to send the control instruction to the mobile terminal so that the mobile terminal performs an operation corresponding to the control instruction.
An audio playing control unit 550 configured to play feedback audio data received from the mobile terminal based on the bone conduction speaker module in the smart helmet.
The apparatus of the embodiment of the present invention can be used to execute the corresponding method embodiment of the present invention, and accordingly achieves the technical effects of the above voice interaction control method embodiment of the intelligent helmet, which are not repeated here.
In the embodiment of the present invention, the relevant functional modules may be implemented by a hardware processor.
In another aspect, an embodiment of the present invention provides a storage medium, on which a computer program is stored, where the program is executed by a processor to perform the steps of the above voice interaction control method for an intelligent helmet.
The product can execute the method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executed method. For technical details not elaborated in this embodiment, refer to the description of the intelligent helmet embodiment of the present invention.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the part of the above technical solutions that is essential or that contributes to the related art may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, and which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method of each embodiment or of some parts of an embodiment.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An intelligent helmet, comprising:
an audio acquisition module configured to acquire user wake-up audio data;
a wake-up recognition module configured to detect whether the collected user wake-up audio data meets a preset wake-up condition;
a control instruction detection module configured to detect user control audio data and parse a control instruction corresponding to the user control audio data when the user wake-up audio data meets the wake-up condition;
a terminal control module configured to send the control instruction to a mobile terminal so that the mobile terminal executes an operation corresponding to the control instruction; and
a bone conduction speaker module configured to play feedback audio data received from the mobile terminal.
2. The smart helmet of claim 1, wherein the audio collection module comprises an array of microphones configured to enhance user audio data corresponding to a set sound source angular range and suppress user audio data outside the set sound source angular range.
3. The smart helmet of claim 2, wherein the array of microphones comprises a first microphone and a second microphone, the first microphone and the second microphone being disposed on a same side of the smart helmet proximate to a human face, and a pickup direction of the first microphone and the second microphone being toward a human mouth position.
4. The smart helmet of claim 1, further comprising one or more of the following noise reduction modules: an echo cancellation module, a background noise suppression module and an audio dynamic amplification module,
wherein the echo cancellation module is configured to cancel an echo audio component in the collected user audio data, the background noise suppression module is configured to suppress a background noise component in the collected user audio data, and the audio dynamic amplification module is configured to dynamically amplify the collected user audio data.
5. The smart helmet of claim 4, further comprising:
the communication mode detection module is configured to detect whether the intelligent helmet is in a communication mode or not, and when the intelligent helmet is detected to be in the communication mode, the noise reduction module is triggered to execute corresponding noise reduction operation.
6. The intelligent helmet according to claim 1, wherein the wake-up recognition module is configured to recognize a spoken sentence corresponding to the collected user wake-up audio data, detect whether the spoken sentence includes a preset wake-up keyword, and determine that the collected user wake-up audio data satisfies the wake-up condition when the spoken sentence includes the wake-up keyword.
7. The intelligent helmet according to claim 6, wherein the wake-up recognition module is configured to recognize the spoken sentence corresponding to the collected user audio data based on a preset deep neural network.
8. The smart helmet of claim 1, wherein the smart helmet is bluetooth connected to the mobile terminal.
9. A voice interaction control method of an intelligent helmet comprises the following steps:
acquiring user audio data, wherein the user audio data is acquired based on an audio acquisition module in an intelligent helmet;
detecting whether the collected user audio data meets a preset wake-up condition;
when the user audio data meets the wake-up condition, detecting user control audio data and parsing a control instruction corresponding to the user control audio data;
sending the control instruction to a mobile terminal so that the mobile terminal can execute an operation corresponding to the control instruction;
and playing feedback audio data received from the mobile terminal based on the bone conduction speaker module in the intelligent helmet.
10. The method of claim 9, wherein the detecting whether the collected user audio data meets a preset wake condition comprises:
recognizing a spoken sentence corresponding to the user audio data; and
detecting whether the spoken sentence contains a preset wake-up keyword, so as to determine accordingly whether the user audio data meets the wake-up condition.
CN202011431294.5A 2020-12-07 2020-12-07 Intelligent helmet and voice interaction control method of intelligent helmet Pending CN112532266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011431294.5A CN112532266A (en) 2020-12-07 2020-12-07 Intelligent helmet and voice interaction control method of intelligent helmet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011431294.5A CN112532266A (en) 2020-12-07 2020-12-07 Intelligent helmet and voice interaction control method of intelligent helmet

Publications (1)

Publication Number Publication Date
CN112532266A true CN112532266A (en) 2021-03-19

Family

ID=74998974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011431294.5A Pending CN112532266A (en) 2020-12-07 2020-12-07 Intelligent helmet and voice interaction control method of intelligent helmet

Country Status (1)

Country Link
CN (1) CN112532266A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170309152A1 (en) * 2016-04-20 2017-10-26 Ulysses C. Dinkins Smart safety apparatus, system and method
CN107331397A (en) * 2017-06-15 2017-11-07 肇庆市亿尔声学电子科技有限公司 The method that a kind of hot-tempered crash helmet of intelligence drop and the helmet communicate with mobile phone wireless
CN110797015A (en) * 2018-12-17 2020-02-14 北京嘀嘀无限科技发展有限公司 Voice wake-up method and device, electronic equipment and storage medium
CN110070863A (en) * 2019-03-11 2019-07-30 华为技术有限公司 A kind of sound control method and device
CN111326156A (en) * 2020-04-16 2020-06-23 杭州趣慧科技有限公司 Intelligent helmet control method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112971260A (en) * 2021-03-25 2021-06-18 广州讯成科技有限公司 Intelligent helmet based on control of mobile phone APP
CN113100531A (en) * 2021-03-25 2021-07-13 广州讯成科技有限公司 Intelligent head protection system with AI voice interaction technology
CN112971260B (en) * 2021-03-25 2022-06-07 广州讯成科技有限公司 Intelligent helmet based on control of mobile phone APP
CN113409788A (en) * 2021-07-15 2021-09-17 深圳市同行者科技有限公司 Voice wake-up method, system, device and storage medium
CN113593530A (en) * 2021-07-26 2021-11-02 国网安徽省电力有限公司建设分公司 Safety helmet system based on NLP technology and operation method
CN113628619A (en) * 2021-08-11 2021-11-09 雅迪科技集团有限公司 Intelligent helmet voice interaction system
CN113826981A (en) * 2021-08-31 2021-12-24 上海大学 Intelligent helmet control system and method for take-out personnel
CN113679139A (en) * 2021-09-26 2021-11-23 深圳市众鸿科技股份有限公司 Deep learning-based voice recognition system and method for intelligent helmet
CN113925250A (en) * 2021-11-12 2022-01-14 中国船舶重工集团公司第七一九研究所 Mountain forest fire-fighting intelligent helmet system and method

Similar Documents

Publication Publication Date Title
CN112532266A (en) Intelligent helmet and voice interaction control method of intelligent helmet
US10410634B2 (en) Ear-borne audio device conversation recording and compressed data transmission
CN105323648B (en) Caption concealment method and electronic device
CN108710615B (en) Translation method and related equipment
CN108681440A (en) A kind of smart machine method for controlling volume and system
WO2021114953A1 (en) Voice signal acquisition method and apparatus, electronic device, and storage medium
KR102565882B1 (en) the Sound Outputting Device including a plurality of microphones and the Method for processing sound signal using the plurality of microphones
CN107919138B (en) Emotion processing method in voice and mobile terminal
CN109360549B (en) Data processing method, wearable device and device for data processing
EP4191579A1 (en) Electronic device and speech recognition method therefor, and medium
CN109215683B (en) Prompting method and terminal
CN110364156A (en) Voice interactive method, system, terminal and readable storage medium storing program for executing
CN110070863A (en) A kind of sound control method and device
CN110246513B (en) Voice signal processing method and mobile terminal
CN115482830B (en) Voice enhancement method and related equipment
CN109412544B (en) Voice acquisition method and device of intelligent wearable device and related components
US20220239269A1 (en) Electronic device controlled based on sound data and method for controlling electronic device based on sound data
CN111933167B (en) Noise reduction method and device of electronic equipment, storage medium and electronic equipment
US20240013789A1 (en) Voice control method and apparatus
CN113744750A (en) Audio processing method and electronic equipment
CN117480554A (en) Voice enhancement method and related equipment
WO2019228329A1 (en) Personal hearing device, external sound processing device, and related computer program product
US20230239800A1 (en) Voice Wake-Up Method, Electronic Device, Wearable Device, and System
CN113766385B (en) Earphone noise reduction method and device
CN111182416B (en) Processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 215123 14 Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou, Jiangsu.

Applicant after: Sipic Technology Co.,Ltd.

Address before: 215123 14 Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou, Jiangsu.

Applicant before: AI SPEECH Co.,Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210319