WO2020211806A1 - Method and device for providing rescue voice prompts

Method and device for providing rescue voice prompts

Info

Publication number
WO2020211806A1
Authority
WO
WIPO (PCT)
Prior art keywords
rescue
voice
information
prompt
wearable
Prior art date
Application number
PCT/CN2020/085049
Other languages
English (en)
French (fr)
Inventor
陆乐
尤晓彤
俸安琪
丘富铨
田中秀治
喜熨斗智也
Original Assignee
上海救要救信息科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海救要救信息科技有限公司
Publication of WO2020211806A1

Classifications

    • G: PHYSICS
      • G10: MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
          • G10L 15/00: Speech recognition
            • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
              • G10L 2015/225: Feedback of the input speech
            • G10L 15/28: Constructional details of speech recognition systems
              • G10L 15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04W: WIRELESS COMMUNICATION NETWORKS
          • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
            • H04W 4/90: Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
        • H04M: TELEPHONIC COMMUNICATION
          • H04M 2242/00: Special services or facilities
            • H04M 2242/04: Special services or facilities for emergency applications
            • H04M 2242/12: Language recognition, selection or translation arrangements

Definitions

  • This application relates to the field of rescue, and in particular to a technology for providing rescue voice prompts.
  • An object of this application is to provide a method for providing rescue voice prompts.
  • a method for providing rescue voice prompts on a wearable rescue device includes the following steps:
  • a method for providing rescue voice prompts on a network device side includes the following steps:
  • the subsequent voice prompt information is used for the wearable rescue device to provide to the rescued object.
  • a wearable rescue device for providing rescue voice prompts, the wearable rescue device including:
  • a first-first module, configured to collect the voice information of the rescued object at the rescue site if the prompt termination condition is not met, and to send the voice information to the corresponding network device;
  • a first-second module, configured to receive subsequent voice prompt information returned by the network device based on the voice information;
  • a first-third module, configured to provide the subsequent voice prompt information to the rescued object; and
  • a first-fourth module, configured to determine that the prompt termination condition is met when at least one preset prompt termination event occurs.
  • a network device for providing rescue voice prompts including:
  • a second-first module, configured to receive the voice information of the rescued object at the rescue scene, where the voice information is collected and sent by the corresponding wearable rescue device;
  • a second-second module, configured to perform a voice recognition operation on the voice information and to determine corresponding subsequent voice prompt information based on the corresponding voice recognition result; and
  • a second-third module, configured to return the subsequent voice prompt information to the wearable rescue device;
  • wherein the subsequent voice prompt information is used for the wearable rescue device to provide to the rescued object.
  • a method for providing rescue voice prompts includes the following steps:
  • the wearable rescue device collects the voice information of the rescued object at the rescue site, and sends the voice information to the corresponding network device;
  • the network device receives the voice information of the rescued object at the rescue site, performs a voice recognition operation on the voice information, determines the corresponding subsequent voice prompt information based on the corresponding voice recognition result, and returns the subsequent voice prompt information to the wearable rescue device;
  • the wearable rescue device receives subsequent voice prompt information returned by the network device based on the voice information, and provides the subsequent voice prompt information to the rescued object;
  • the wearable rescue device determines that the prompt termination condition is satisfied.
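As a non-limiting illustration of the overall flow summarized above, the following Python sketch shows how a wearable-device-side loop might be organized. The objects and methods (`device`, `network`, `initial_prompt`, `next_prompt`, `collect_voice`, `play_audio`, `termination_event_occurred`) are hypothetical placeholders and are not defined by this application.

```python
def rescue_prompt_loop(device, network):
    """Continuous prompt/feedback dialogue until the prompt termination condition is met."""
    device.play_audio(network.initial_prompt())      # play an initial prompt to the rescued person
    while not device.termination_event_occurred():   # prompt termination condition not yet met
        voice = device.collect_voice()                # collect the rescued person's voice information
        prompt = network.next_prompt(voice)           # recognition + follow-up prompt on the network side
        if prompt is None:                            # the network signalled termination
            break
        device.play_audio(prompt)                     # provide the subsequent voice prompt
```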
  • a device for providing rescue voice prompts wherein the device includes:
  • a processor, and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the operations of any one of the above methods.
  • a computer-readable medium storing instructions that, when executed, cause a system to perform operations of any of the above methods.
  • the wearable rescue device worn by the rescuer in this application provides rescue prompts that match the language of the rescued person, and can provide further prompts or inquiries based on the rescued person's feedback, achieving a continuous dialogue that reassures the rescued person; it can also help rescuers provide more targeted rescue and thus improve rescue efficiency.
  • Fig. 1 shows a system structure for providing rescue voice prompts according to an embodiment of the present application
  • Fig. 2 shows the flow of a method for providing rescue voice prompts according to another embodiment of the present application
  • FIG. 3 shows the flow of a method for providing rescue voice prompts at the wearable rescue device end according to an embodiment of the present application;
  • FIG. 4 shows the flow of a method for providing voice prompts for rescue at the network device side according to another embodiment of the present application
  • Fig. 5 shows functional modules of a wearable rescue device according to an embodiment of the present application
  • Fig. 6 shows functional modules of a network device according to another embodiment of the present application.
  • Fig. 7 shows functional modules of an exemplary system that can be used in various embodiments of the present application.
  • the terminal, the equipment of the service network, and the trusted party all include one or more processors (for example, a central processing unit (CPU)), input/output interfaces, network interfaces, and RAM.
  • Memory may include non-permanent storage in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • the equipment referred to in this application includes but is not limited to user equipment, network equipment, or equipment formed by the integration of user equipment and network equipment through a network.
  • the user equipment includes, but is not limited to, any mobile electronic product that can perform human-computer interaction with the user (for example, through a touchpad), such as a smart phone or a tablet computer, and the mobile electronic product can adopt any operating system, such as the Android operating system or the iOS operating system.
  • the network device includes an electronic device that can automatically perform numerical calculation and information processing in accordance with preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), embedded devices, etc.
  • the network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing, namely a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and a wireless ad hoc network (Ad Hoc Network).
  • the device may also be a program running on the user equipment, on the network equipment, or on a device formed by integrating user equipment with network equipment, or network equipment with a touch terminal, through a network.
  • the wearable rescue device referred to in this application is a hardware device, including but not limited to smart helmets, smart glasses or other hardware devices, which can be directly worn on the user's body or integrated into the user's clothes or other accessories.
  • it has the ability to communicate with other devices (for example, including but not limited to other user devices of the same user, or user devices of other users, cloud servers and other network devices, etc.).
  • the other hardware devices described above include, but are not limited to, user devices held by rescuers, or devices fixed to the upper arms, torso, or other positions of rescuers through straps, brackets, or the like.
  • the user equipment includes, but is not limited to, mobile phones, tablets, PDAs, etc.
  • a method for providing rescue voice prompts is provided.
  • the method is implemented based on the system shown in FIG. 1, wherein the wearable rescue equipment is worn by the corresponding rescuer. It is connected to a network device through wired/wireless communication, and can be used to provide voice prompt information to rescued persons.
  • the wearable rescue device has a speaker for playing audio. Referring to Figure 2, the method includes the following steps:
  • the wearable rescue device collects the voice information of the rescued object at the rescue site, and sends the voice information to the corresponding network device;
  • the network device receives the voice information of the rescued object at the rescue site, performs a voice recognition operation on the voice information, determines the corresponding subsequent voice prompt information based on the corresponding voice recognition result, and returns the subsequent voice prompt information to the wearable rescue device;
  • the wearable rescue device receives subsequent voice prompt information returned by the network device based on the voice information, and provides the subsequent voice prompt information to the rescued object;
  • the wearable rescue device determines that the prompt termination condition is satisfied.
  • a method for providing rescue voice prompts at the wearable rescue device end includes step S110, step S120, step S130, and step S140.
  • in step S110, if the prompt termination condition is not met, the wearable rescue device collects the voice information of the rescued object at the rescue scene (for example, based on the built-in microphone of the wearable rescue device), and sends the voice information to the corresponding network device.
  • if the prompt termination condition is satisfied, the wearable rescue device stops collecting the voice information (or ambient sounds) of the rescued object and stops providing continuous voice prompts to the rescued object.
  • the wearable rescue device will terminate the voice prompts provided to the rescued object, so as not to cause interference to the rescued object or related rescuers in the further rescue process.
  • in step S120, the wearable rescue device receives subsequent voice prompt information returned by the network device based on the voice information.
  • the subsequent voice prompt information is manually issued in real time or manually recorded in advance, while in other embodiments, the subsequent voice prompt information is synthesized by a computing device (such as a network device such as a server) .
  • the subsequent voice prompt information is determined based on the information provided (for example, by voice input) by the rescued person (or by the rescuer providing rescue) to the network device; for example, the subsequent voice prompt information is determined after the rescued person or rescuer provides feedback on the previous voice prompt, and the system can determine and provide the next piece of subsequent voice prompt information based on the feedback on the current subsequent voice prompt information, until the above prompt termination condition is met.
  • in step S130, the wearable rescue device provides the subsequent voice prompt information to the rescued object.
  • in some embodiments, the wearable rescue device outputs the subsequent voice prompt information through its output unit (such as a speaker unit).
  • in step S140, when at least one preset prompt termination event occurs, the wearable rescue device determines that the prompt termination condition is satisfied, so as to end the continuous provision of voice prompts to the rescued person.
  • the prompt termination event is used to terminate continuous voice prompts.
  • the prompt termination event includes, but is not limited to, a rescue user wearing a wearable rescue device actively terminates the voice prompt, or the system detects that the corresponding termination condition has been met.
  • the above method is implemented based on a decision tree.
  • each node of the decision tree corresponds to a piece of voice prompt information;
  • the system determines the corresponding branch of the current node based on the feedback information of the rescued person (or rescuer), thereby determines the subsequent node of that node, and outputs the voice prompt information corresponding to the subsequent node.
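A minimal, non-authoritative sketch of such a decision tree is given below; the class name `PromptNode`, the keyword-based branching and the example dialogue fragment are illustrative assumptions, not text from the application.

```python
class PromptNode:
    """One node of an illustrative rescue decision tree: a prompt plus keyword branches."""
    def __init__(self, prompt_text, branches=None):
        self.prompt_text = prompt_text
        self.branches = branches or {}          # keyword -> PromptNode

    def is_leaf(self):
        return not self.branches

    def next_node(self, feedback_keywords):
        """Select the branch whose trigger keyword appears in the recognised feedback."""
        for keyword, child in self.branches.items():
            if keyword in feedback_keywords:
                return child
        return None                              # no matching branch

# Example fragment of a dialogue tree (contents invented for illustration):
tree = PromptNode(
    "Can you tell me where it hurts?",
    {
        "chest": PromptNode("Please stay still. Is the pain sharp or dull?",
                            {"sharp": PromptNode("Help is on the way; try to breathe slowly.")}),
        "leg": PromptNode("Can you move your toes?"),
    },
)
```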
  • the above method further includes step S150 (not shown) before step S110.
  • in step S150, the wearable rescue device provides initial voice prompt information to the rescued object at the rescue site.
  • the initial voice prompt information is used to comfort the rescued person after the rescuer arrives at the rescue site, or to obtain the initial input of the system by "questioning" (for example, asking about the rescued person's condition), so as to improve the matching degree between the rescued person's voice input and the input required by the system, thereby improving rescue efficiency.
  • the above method further includes step S160 (not shown) and step S170 (not shown).
  • in step S160, the wearable rescue device receives the rescue prompt information sent by the network device.
  • the system can also provide additional rescue prompts for volunteers, to guide them to judge and execute rescue operations in accordance with standard procedures, thereby improving the rescue efficiency of rescuers; for example, less experienced rescuers can also perform the correct operations under the system's prompts.
  • in step S170, the wearable rescue device provides the rescue prompt information to the rescue user of the wearable rescue device.
  • the rescue prompt information is also provided to the corresponding rescuer in the form of voice output in some embodiments.
  • the wearable rescue device provides the subsequent voice prompt information to the rescued object through the first audio output unit of the wearable rescue device; and in the above step S170, the wearable rescue device provides the rescue prompt information to the rescue user of the wearable rescue device through the second audio output unit of the wearable rescue device.
  • the first audio output unit and the second audio output unit are independent of each other, so as to provide the rescued person and the rescuer with the information they need and avoid the mutual interference of the prompt information, thereby further improving the rescue efficiency.
  • the first audio output unit is a loudspeaker (or referred to as "external amplifier")
  • the second audio output unit is an earplug.
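A minimal sketch of how the two independent audio output units might be addressed in software follows; `PromptRouter` and the `play` method on the channel objects are hypothetical names introduced only for illustration.

```python
class PromptRouter:
    """Route prompts to two independent audio outputs so they do not interfere."""
    def __init__(self, loudspeaker, earpiece):
        self.loudspeaker = loudspeaker   # first audio output unit: heard by the rescued person
        self.earpiece = earpiece         # second audio output unit: heard only by the rescuer

    def deliver(self, subsequent_prompt=None, rescue_prompt=None):
        if subsequent_prompt is not None:
            self.loudspeaker.play(subsequent_prompt)
        if rescue_prompt is not None:
            self.earpiece.play(rescue_prompt)
```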
  • the above method further includes step S180 (not shown).
  • in step S180, the wearable rescue device collects supplementary voice information of the rescue user based on an information supplement instruction from the rescue user of the wearable rescue device, and sends the supplementary voice information to the corresponding network device.
  • the information supplement instruction is generated by the rescuer by pressing the relevant button or issuing the corresponding voice instruction, and is used for the rescuer to supplement the description of the on-site situation in time, so that the input information obtained by the network device is more accurate, thereby improving the accuracy of the voice prompt information and the rescue efficiency.
  • in step S120, the wearable rescue device receives subsequent voice prompt information returned by the network device based on the voice information and the supplementary voice information.
  • in order to reduce unnecessary interference and to collect as much useful information from the rescued person as possible to improve rescue efficiency, in step S110, if the prompt termination condition is not met, the wearable rescue device, in response to a voice collection instruction from the rescue user wearing the wearable rescue device, collects the voice information of the rescued object at the rescue scene and sends the voice information to the corresponding network device.
  • the voice collection instruction is issued by the rescuer by pressing a physical button or issuing a voice instruction, and is executed by a wearable rescue device.
  • the above method further includes step S190 (not shown).
  • in step S190, the wearable rescue device obtains the identification information of the rescued object at the rescue site and sends the identification information to the corresponding network device, where the identification information is used for the corresponding network device to identify the identity information of the rescued object; then in step S120, the wearable rescue device receives subsequent voice prompt information returned by the network device based on the voice information and the identification information.
  • for example, the identification information is a photo used for facial recognition, a voice sample used for voiceprint recognition, or fingerprint/blood-sample/iris information used for identity recognition (collected by an external device), and is used to identify the identity of the rescued object.
  • after determining the identity information of the rescued object based on this identification information, the system can further determine the rescue assistance information of the rescued object, including but not limited to the language information of the rescued object (used to provide prompt information in the corresponding language or to provide translation), allergy information (used to determine the allergy status of the rescued object and avoid introducing allergens), and emergency contact information (used to contact the corresponding emergency contact to provide necessary assistance), so as to ensure the safety of the rescued object and improve rescue efficiency.
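A hedged sketch of mapping identification information to rescue assistance information follows; the registry contents, the `recognize_identity` callable and the person identifiers are made-up placeholders, not data or APIs from the application.

```python
# Hypothetical registry keyed by person identifiers returned by the recognition backend.
ASSISTANCE_REGISTRY = {
    "person-001": {
        "language": "ja",                        # give subsequent prompts in Japanese
        "allergies": ["penicillin"],             # avoid introducing these allergens
        "emergency_contact": "+81-90-0000-0000", # contact to notify for assistance
    },
}

def rescue_assistance_info(identification_info, recognize_identity):
    """Map raw identification data (photo, voiceprint, fingerprint, ...) to assistance info."""
    person_id = recognize_identity(identification_info)            # e.g. face or voiceprint matching
    return ASSISTANCE_REGISTRY.get(person_id, {"language": "en"})  # default if the person is unknown
```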
  • the aforementioned prompt termination event includes:
  • the wearable rescue device receives a prompt termination instruction sent by the network device; for example, when the emergency faced by the rescued object has been reasonably handled or the specific treatment method has been finalized, the network device generates the prompt termination instruction and sends it to the wearable rescue device; and/or
  • the wearable rescue device detects the prompt termination operation of the rescue user, for example, the rescuer presses the prompt termination button on the wearable rescue device (such as a smart helmet) to terminate the provision of voice prompts.
  • the voice prompt is terminated after a more advanced rescue unit arrives at the scene and takes over.
  • the wearable rescue device can submit an incident report from the beginning of the emergency to the current moment.
  • the incident report can be transmitted, by means including but not limited to Bluetooth, near-field communication, e-mail, etc., to the computing equipment of the corresponding higher-level rescue unit.
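The termination events described above could be checked as sketched below; the state flags are hypothetical names for device/network state, assumed only for illustration.

```python
def prompt_termination_condition_met(state):
    """True when at least one preset prompt termination event has occurred."""
    termination_events = (
        state.network_termination_instruction_received,  # instruction from the network device
        state.rescuer_pressed_termination_button,        # e.g. a button on the smart helmet
        state.higher_level_unit_took_over,               # a more advanced unit arrived and took over
    )
    return any(termination_events)
```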
  • the wearable rescue device determines that the prompt termination condition is satisfied, and submits a rescue activity report to the corresponding higher-level rescue unit, wherein the rescue activity report includes historical voice information of the rescued object and historical prompt information of the network device.
  • the wearable rescue device submits the information collected during this period and the information provided to the rescued object.
  • the aforementioned rescue activity report can be automatically generated by the system and submitted to a higher-level rescue unit through other operations (for example, the user presses a physical button, gives a voice command, etc.).
  • in order to reduce the transmission pressure on the above-mentioned wearable rescue device, or so that the higher-level rescue unit can grasp the situation at the scene in advance (for example, by receiving the incident report while on its way to the scene) and thereby improve rescue efficiency, the wearable rescue device sends a rescue report submission request to the network device, and the rescue report submission request is used for the network device to send the rescue activity report to the corresponding higher-level rescue unit.
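A sketch, under assumed transport APIs (`send_to_network`, `transmit_direct`), of assembling the rescue activity report (historical voice information plus historical prompt information) and submitting it either directly or via the network device on a rescue report submission request:

```python
def build_rescue_activity_report(history):
    """Bundle what the rescued person said and what the system prompted."""
    return {
        "rescued_voice_history": history.rescued_voice,   # historical voice information
        "prompt_history": history.network_prompts,        # historical prompt information
    }

def submit_report(report, device, via_network=True):
    if via_network:
        # rescue report submission request: the network device forwards the report
        device.send_to_network({"type": "rescue_report_submission_request", "report": report})
    else:
        # direct transmission to the higher-level unit's computing equipment
        device.transmit_direct(report, channel="bluetooth")  # or NFC, e-mail, etc.
```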
  • a method for providing rescue voice prompts on the network device side is also provided.
  • the method includes step S210, step S220, and step S230.
  • in step S210, the network device receives the voice information of the rescued object at the rescue site, where the voice information is collected and sent by the corresponding wearable rescue device; for example, the wearable rescue device collects the voice information with its built-in microphone.
  • in step S220, the network device performs a voice recognition operation on the voice information, and determines corresponding subsequent voice prompt information based on the corresponding voice recognition result.
  • the subsequent voice prompt information is used for the wearable rescue device to provide to the rescued object.
  • for example, the network device first provides a piece of voice prompt information to the wearable rescue device for playback, performs voice recognition on the voice feedback provided by the rescued object to determine the content of that feedback, then determines the next piece of voice prompt information based on the feedback content, and provides the next piece of voice prompt information to the wearable rescue device for playback.
  • in step S230, the network device returns the subsequent voice prompt information to the wearable rescue device.
  • the network device determines, based on a decision tree, the next piece of voice prompt information that follows a given piece of voice prompt information, so as to provide continuous voice prompts that comfort the rescued person, and can gradually and accurately determine the final disposal method needed by the rescued person. For example, each node of the decision tree corresponds to a piece of voice prompt information, and the subsequent node (if any) of each node is determined based on the information fed back by the rescued object.
  • the above-mentioned step S220 includes sub-step S221 (not shown), sub-step S222 (not shown), and sub-step S223 (not shown).
  • in sub-step S221, the network device performs a voice recognition operation on the voice information to determine the corresponding voice recognition result; in sub-step S222, the network device applies the voice recognition result to the current node of the rescue decision tree to determine the corresponding subsequent node of the rescue decision tree; and in sub-step S223, the network device determines the corresponding subsequent voice prompt information based on the subsequent node.
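A minimal sketch of sub-steps S221-S223, reusing the illustrative `PromptNode` class sketched earlier; `recognize_keywords` stands in for whatever speech recognition service the network device actually uses and is an assumption, not a named API.

```python
def handle_voice_information(voice_audio, current_node, recognize_keywords):
    """Sub-steps S221-S223 on the network side (illustrative only)."""
    keywords = recognize_keywords(voice_audio)      # S221: voice recognition result (keywords)
    subsequent = current_node.next_node(keywords)   # S222: apply the result to the current node
    if subsequent is None:
        return current_node, None                   # no matching branch: keep the node, re-prompt
    return subsequent, subsequent.prompt_text       # S223: follow-up prompt from the subsequent node
```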
  • the network device determines the corresponding subsequent voice prompt information and rescue prompt information based on the subsequent node, wherein the rescue prompt information is used for the wearable rescue device to provide to the rescue user of the wearable rescue device.
  • the system can also provide additional rescue prompts for volunteers to guide them to perform rescue operations, thereby improving the rescue efficiency of rescuers; for example, rescuers with less experience can also perform the correct operations under the system's prompts.
  • the rescuers’ supplementary description of the scene is used as the input of the system, which can further improve the accuracy of the output provided by the system.
  • rescuers can supplement the description of the scene situation in time, so that the input information obtained by the network device is more accurate, thereby improving the accuracy of the voice prompt information and the rescue efficiency.
  • the above method further includes step S240 (not shown).
  • in step S240, the network device receives the supplementary voice information of the rescue user sent by the wearable rescue device, where the supplementary voice information is collected by the wearable rescue device. The above step S220 further includes sub-step S224 (not shown): in sub-step S224, the network device performs a voice recognition operation on the supplementary voice information to determine the corresponding supplementary voice recognition result. Subsequently, in sub-step S222, the network device applies the voice recognition result and the supplementary voice recognition result to the current node of the rescue decision tree to determine the corresponding subsequent node of the rescue decision tree. For example, the network device merges the keywords in the voice recognition result and the supplementary voice recognition result, and applies the merged result (for example, the union of the keyword sets) as input to the current node of the aforementioned rescue decision tree.
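The keyword-merging example above amounts to a set union, for instance:

```python
def merged_keywords(recognition_result, supplementary_result):
    """Union of the keyword sets from the two recognition results."""
    return set(recognition_result) | set(supplementary_result)

# e.g. merged_keywords({"chest", "pain"}, {"fall", "pain"}) == {"chest", "pain", "fall"}
```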
  • the above method further includes step S250 (not shown).
  • in step S250, if the subsequent node is a leaf node of the rescue decision tree, the network device sends a prompt termination instruction to the wearable rescue device, and the prompt termination instruction is used by the wearable rescue device to terminate the provision of rescue voice prompts.
  • the aforementioned leaf node may include an editable item, namely an editable termination condition on whether to terminate the provision of rescue prompts when the higher-level rescue unit arrives at the scene.
  • the above method further includes step S260 (not shown).
  • in step S260, the network device receives the identification information sent by the corresponding wearable rescue device, and identifies the identity information of the rescued object at the rescue site based on the identification information.
  • then, in step S220, the network device performs a voice recognition operation on the voice information based on the identity information, and determines the corresponding subsequent voice prompt information based on the corresponding voice recognition result and the identity information; for example, after the identity of the rescued person is confirmed, the corresponding language information can be determined so that subsequent voice prompts are provided in that language.
  • the identity identification information is used for the corresponding network device to identify the identity information of the rescued object.
  • for example, the identification information is a photo used for facial recognition, a voice sample used for voiceprint recognition, or fingerprint/blood-sample information used for identity recognition (collected by an external device), and is used to identify the identity information of the rescued object.
  • based on the identity information, the system can further determine the rescue assistance information of the rescued object, including but not limited to the language information of the rescued object (used to provide prompt information in the corresponding language or to provide translation), allergy information (used to determine the allergy status of the rescued object and avoid introducing allergens), and emergency contact information (used to contact the corresponding emergency contact to provide necessary assistance), so as to ensure the safety of the rescued object and improve rescue efficiency.
  • the voice prompt is terminated after a more advanced rescue unit arrives at the scene and takes over.
  • the wearable rescue device can submit an incident report covering the period from the occurrence of the emergency to the current moment, and the incident report can be transmitted, by means including but not limited to Bluetooth, near-field communication, e-mail, etc., to the computing equipment of the corresponding higher-level rescue unit.
  • so that the higher-level rescue unit can grasp the situation at the scene in advance and improve rescue efficiency, the above-mentioned incident report can also be sent to the higher-level rescue unit by the network device.
  • the above method further includes step S270 (not shown).
  • in step S270, the network device sends a rescue activity report to the corresponding higher-level rescue unit based on the rescue report submission request sent by the wearable rescue device, wherein the rescue activity report includes the historical voice information of the rescued object and the historical prompt information of the network device.
  • a wearable rescue device 100 for providing rescue voice prompts is provided.
  • the wearable rescue device 100 includes a first-first module 110, a first-second module 120, a first-third module 130 and a first-fourth module 140.
  • if the prompt termination condition is not met, the first-first module 110 collects the voice information of the rescued object at the rescue site (for example, based on the microphone built into the wearable rescue device) and sends the voice information to the corresponding network device.
  • if the prompt termination condition is satisfied, the wearable rescue device stops collecting the voice information (or ambient sounds) of the rescued object and stops providing continuous voice prompts to the rescued object.
  • the wearable rescue device will terminate the voice prompts provided to the rescued object, so as not to cause interference to the rescued object or related rescuers in the further rescue process.
  • the first-second module 120 receives subsequent voice prompt information returned by the network device based on the voice information.
  • the subsequent voice prompt information is manually issued in real time or manually recorded in advance, while in other embodiments, the subsequent voice prompt information is synthesized by a computing device (such as a network device such as a server) .
  • the subsequent voice prompt information is determined based on the information provided (for example, by voice input) by the rescued person (or by the rescuer providing rescue) to the network device; for example, the subsequent voice prompt information is determined after the rescued person or rescuer provides feedback on the previous voice prompt, and the system can determine and provide the next piece of subsequent voice prompt information based on the feedback on the current subsequent voice prompt information, until the above prompt termination condition is met.
  • the first-third module 130 provides the subsequent voice prompt information to the rescued object.
  • in some embodiments, the wearable rescue device outputs the subsequent voice prompt information through the output unit (such as a speaker unit) of the wearable rescue device.
  • the first-fourth module 140 determines that the prompt termination condition is met when at least one preset prompt termination event occurs, so as to end the continuous provision of voice prompts to the rescued person.
  • the prompt termination event is used to terminate continuous voice prompts.
  • the prompt termination event includes, but is not limited to, a rescue user wearing a wearable rescue device actively terminates the voice prompt, or the system detects that the corresponding termination condition has been met.
  • the above-mentioned system is implemented based on a decision tree.
  • each node of the decision tree corresponds to a piece of voice prompt information;
  • the system determines the corresponding branch of the current node based on the feedback information of the rescued person (or rescuer), thereby determines the subsequent node of that node, and outputs the voice prompt information corresponding to the subsequent node.
  • the above-mentioned wearable rescue device further includes a first-fifth module 150 (not shown), which operates before the first-first module 110.
  • the first-fifth module 150 provides initial voice prompt information to the rescued object at the rescue site.
  • the initial voice prompt information is used to comfort the rescued person after the rescuer arrives at the rescue site, or to obtain the initial input of the system by "questioning" (for example, asking about the rescued person's condition), so as to improve the matching degree between the rescued person's voice input and the input required by the system, thereby improving rescue efficiency.
  • the aforementioned wearable rescue device further includes a first-sixth module 160 (not shown) and a first-seventh module 170 (not shown).
  • the first-sixth module 160 receives the rescue prompt information sent by the network device.
  • the system can also provide additional rescue prompts for volunteers, to guide them to judge and execute rescue operations in accordance with standard procedures, thereby improving the rescue efficiency of rescuers; for example, less experienced rescuers can also perform the correct operations under the system's prompts. The first-seventh module 170 provides the rescue prompt information to the rescue user of the wearable rescue device.
  • the rescue prompt information is also provided to the corresponding rescuer in the form of voice output in some embodiments.
  • the first-third module 130 provides the subsequent voice prompt information to the rescued object through the first audio output unit of the wearable rescue device, and the first-seventh module 170 provides the rescue prompt information to the rescue user of the wearable rescue device through the second audio output unit of the wearable rescue device.
  • the first audio output unit and the second audio output unit are independent of each other, so as to provide the rescued person and the rescuer with the information they need and avoid the mutual interference of the prompt information, thereby further improving the rescue efficiency.
  • the first audio output unit is a loudspeaker (or referred to as "external amplifier")
  • the second audio output unit is an earplug.
  • the aforementioned wearable rescue device further includes a first-eighth module 180 (not shown).
  • the first-eighth module 180 collects supplementary voice information of the rescue user based on an information supplement instruction from the rescue user of the wearable rescue device, and sends the supplementary voice information to the corresponding network device.
  • the information supplement instruction is generated by the rescuer by pressing the relevant button or issuing the corresponding voice instruction, and is used for the rescuer to supplement the description of the on-site situation in time, so that the input information obtained by the network device is more accurate, thereby improving the accuracy of the voice prompt information and the rescue efficiency.
  • the first-second module 120 receives subsequent voice prompt information returned by the network device based on the voice information and the supplementary voice information.
  • the first-first module 110, in response to a voice collection instruction from the rescue user of the wearable rescue device, collects the voice information of the rescued object at the rescue scene and sends the voice information to the corresponding network device.
  • the voice collection instruction is issued by the rescuer by pressing a physical button or issuing a voice instruction, and is executed by a wearable rescue device.
  • the aforementioned wearable rescue device further includes a first-ninth module 190 (not shown).
  • the first-ninth module 190 obtains the identification information of the rescued object at the rescue site and sends the identification information to the corresponding network device, where the identification information is used for the corresponding network device to identify the identity information of the rescued object; the first-second module 120 then receives subsequent voice prompt information returned by the network device based on the voice information and the identification information.
  • for example, the identification information is a photo used for facial recognition, a voice sample used for voiceprint recognition, or fingerprint/blood-sample/iris information used for identity recognition (collected by an external device), and is used to identify the identity of the rescued object.
  • after determining the identity information of the rescued object based on this identification information, the system can further determine the rescue assistance information of the rescued object, including but not limited to the language information of the rescued object (used to provide prompt information in the corresponding language or to provide translation), allergy information (used to determine the allergy status of the rescued object and avoid introducing allergens), and emergency contact information (used to contact the corresponding emergency contact to provide necessary assistance), so as to ensure the safety of the rescued object and improve rescue efficiency.
  • the aforementioned prompt termination event includes:
  • the wearable rescue device receives a prompt termination instruction sent by the network device; for example, when the emergency faced by the rescued object has been reasonably handled or the specific treatment method has been finalized, the network device generates the prompt termination instruction and sends it to the wearable rescue device; and/or
  • the wearable rescue device detects the prompt termination operation of the rescue user, for example, the rescuer presses the prompt termination button on the wearable rescue device (such as a smart helmet) to terminate the provision of voice prompts.
  • the voice prompt is terminated after a more advanced rescue unit arrives at the scene and takes over.
  • the wearable rescue device can submit an incident report from the beginning of the emergency to the current moment.
  • the incident report can be transmitted, by means including but not limited to Bluetooth, near-field communication, e-mail, etc., to the computing equipment of the corresponding higher-level rescue unit.
  • when the wearable rescue device detects the prompt termination operation of the rescue user, the first-fourth module 140 determines that the prompt termination condition is satisfied, and submits a rescue activity report to the corresponding higher-level rescue unit.
  • the rescue activity report includes historical voice information of the rescued object and historical prompt information of the network device.
  • the wearable rescue device submits the information collected during this period and the information provided to the rescued object.
  • the aforementioned rescue activity report can be automatically generated by the system and submitted to a higher-level rescue unit through other operations (for example, the user presses a physical button, gives a voice command, etc.).
  • in order to reduce the transmission pressure on the above-mentioned wearable rescue device, or so that the higher-level rescue unit can grasp the situation at the scene in advance (for example, by receiving the incident report while on its way to the scene) and thereby improve rescue efficiency, the wearable rescue device sends a rescue report submission request to the network device, and the rescue report submission request is used for the network device to send the rescue activity report to the corresponding higher-level rescue unit.
  • a network device 200 for providing rescue voice prompts is also provided.
  • the network device 200 includes a second-first module 210, a second-second module 220 and a second-third module 230.
  • the second-first module 210 receives the voice information of the rescued object at the rescue site, where the voice information is collected and sent by the corresponding wearable rescue device; for example, the wearable rescue device collects the voice information with its built-in microphone.
  • the second-second module 220 performs a voice recognition operation on the voice information, and determines corresponding subsequent voice prompt information based on the corresponding voice recognition result.
  • the subsequent voice prompt information is used for the wearable rescue device to provide to the rescued object.
  • for example, the network device first provides a piece of voice prompt information to the wearable rescue device for playback, performs voice recognition on the voice feedback provided by the rescued object to determine the content of that feedback, then determines the next piece of voice prompt information based on the feedback content, and provides the next piece of voice prompt information to the wearable rescue device for playback.
  • the second-third module 230 returns the subsequent voice prompt information to the wearable rescue device.
  • the network device determines, based on a decision tree, the next piece of voice prompt information that follows a given piece of voice prompt information, so as to provide continuous voice prompts that comfort the rescued person, and can gradually and accurately determine the final disposal method needed by the rescued person. For example, each node of the decision tree corresponds to a piece of voice prompt information, and the subsequent node (if any) of each node is determined based on the information fed back by the rescued object.
  • the above-mentioned second-second module 220 includes a second-first sub-module 221 (not shown), a second-second sub-module 222 (not shown), and a second-third sub-module 223 (not shown).
  • the second-first sub-module 221 performs a voice recognition operation on the voice information to determine the corresponding voice recognition result; the second-second sub-module 222 applies the voice recognition result to the current node of the rescue decision tree to determine the corresponding subsequent node of the rescue decision tree; and the second-third sub-module 223 determines the corresponding subsequent voice prompt information based on the subsequent node.
  • the second-third sub-module 223 determines corresponding subsequent voice prompt information and rescue prompt information based on the subsequent node, wherein the rescue prompt information is used for the wearable rescue device to provide to the rescue user of the wearable rescue device.
  • the system can also provide additional rescue prompts for volunteers to guide them to perform rescue operations, thereby improving the rescue efficiency of rescuers; for example, rescuers with less experience can also perform the correct operations under the system's prompts.
  • the rescuers’ supplementary description of the scene is used as the input of the system, which can further improve the accuracy of the output provided by the system.
  • rescuers can supplement the description of the scene situation in time, so that the input information obtained by the network device is more accurate, thereby improving the accuracy of the voice prompt information and the rescue efficiency.
  • the aforementioned network device further includes a second-fourth module 240 (not shown).
  • the second-fourth module 240 receives the supplementary voice information of the rescue user sent by the wearable rescue device, where the supplementary voice information is collected by the wearable rescue device. The second-second module 220 further includes a second-fourth sub-module 224 (not shown), which performs a voice recognition operation on the supplementary voice information to determine the corresponding supplementary voice recognition result. The second-second sub-module 222 then applies the voice recognition result and the supplementary voice recognition result to the current node of the rescue decision tree to determine the corresponding subsequent node of the rescue decision tree. For example, the network device merges the keywords in the voice recognition result and the supplementary voice recognition result, and applies the merged result (for example, the union of the keyword sets) as input to the current node of the aforementioned rescue decision tree.
  • the aforementioned network device further includes a second-fifth module 250 (not shown). If the subsequent node is a leaf node of the rescue decision tree, the second-fifth module 250 sends a prompt termination instruction to the wearable rescue device, and the prompt termination instruction is used by the wearable rescue device to terminate the provision of rescue voice prompts.
  • the aforementioned leaf node may include an editable item, namely an editable termination condition on whether to terminate the provision of rescue prompts when the higher-level rescue unit arrives at the scene.
  • the aforementioned network device further includes a second-sixth module 260 (not shown).
  • the second-sixth module 260 receives the identification information sent by the corresponding wearable rescue device, and identifies the identity information of the rescued object at the rescue site based on the identification information. The second-second module 220 then performs a voice recognition operation on the voice information based on the identity information, and determines the corresponding subsequent voice prompt information based on the corresponding voice recognition result and the identity information; for example, after the identity of the rescued person is recognized, the corresponding language information is determined so that subsequent voice prompts are given in that language.
  • the identity identification information is used for the corresponding network device to identify the identity information of the rescued object.
  • for example, the identification information is a photo used for facial recognition, a voice sample used for voiceprint recognition, or fingerprint/blood-sample information used for identity recognition (collected by an external device), and is used to identify the identity information of the rescued object.
  • based on the identity information, the system can further determine the rescue assistance information of the rescued object, including but not limited to the language information of the rescued object (used to provide prompt information in the corresponding language or to provide translation), allergy information (used to determine the allergy status of the rescued object and avoid introducing allergens), and emergency contact information (used to contact the corresponding emergency contact to provide necessary assistance), so as to ensure the safety of the rescued object and improve rescue efficiency.
  • the voice prompt is terminated after a more advanced rescue unit arrives at the scene and takes over.
  • the wearable rescue device can submit an incident report covering the period from the occurrence of the emergency to the current moment, and the incident report can be transmitted, by means including but not limited to Bluetooth, near-field communication, e-mail, etc., to the computing equipment of the corresponding higher-level rescue unit.
  • so that the higher-level rescue unit can grasp the situation at the scene in advance and improve rescue efficiency, the above-mentioned incident report can also be sent to the higher-level rescue unit by the network device.
  • the aforementioned network device further includes a second-seventh module 270 (not shown).
  • the second-seventh module 270 sends a rescue activity report to the corresponding higher-level rescue unit based on the rescue report submission request sent by the wearable rescue device, wherein the rescue activity report includes the historical voice information of the rescued object and the historical prompt information of the network device.
  • the present application also provides a computer-readable storage medium that stores computer code, and when the computer code is executed, the method described in any of the preceding items is executed.
  • the present application also provides a computer program product.
  • when the computer program product is executed by a computer device, the method described in any one of the preceding items is executed.
  • This application also provides a computer device, which includes:
  • one or more processors; and
  • a memory for storing one or more computer programs;
  • wherein, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any one of the preceding items.
  • Figure 7 shows an exemplary system that can be used to implement the various embodiments described in this application.
  • the system 1000 can be used as any wearable rescue device or network device in each of the described embodiments.
  • the system 1000 may include one or more computer-readable media having instructions (for example, system memory or NVM/storage device 1020) and one or more processors (for example, the processor(s) 1005) coupled to the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
  • the system control module 1010 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 1005 and/or to any suitable device or component in communication with the system control module 1010.
  • the system control module 1010 may include a memory controller module 1030 to provide an interface to the system memory 1015.
  • the memory controller module 1030 may be a hardware module, a software module, and/or a firmware module.
  • the system memory 1015 may be used to load and store data and/or instructions for the system 1000, for example.
  • the system memory 1015 may include any suitable volatile memory, such as a suitable DRAM.
  • the system memory 1015 may include a double data rate type quad synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 1010 may include one or more input/output (I/O) controllers to provide interfaces to the NVM/storage device 1020 and the communication interface(s) 1025.
  • NVM/storage device 1020 may be used to store data and/or instructions.
  • NVM/storage device 1020 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard drives ( Hard Disk, HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • the NVM/storage device 1020 may include storage resources that are physically part of the device on which the system 1000 is installed, or it may be accessed by the device without necessarily being part of the device.
  • the NVM/storage device 1020 may be accessed via the communication interface(s) 1025 through the network.
  • the communication interface(s) 1025 may provide an interface for the system 1000 to communicate through one or more networks and/or with any other suitable devices.
  • the system 1000 can wirelessly communicate with one or more components of the wireless network according to any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 1005 may be packaged with the logic of one or more controllers of the system control module 1010 (eg, the memory controller module 1030). For one embodiment, at least one of the processor(s) 1005 may be packaged with the logic of one or more controllers of the system control module 1010 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 1005 may be integrated with the logic of one or more controllers of the system control module 1010 on the same mold. For one embodiment, at least one of the processor(s) 1005 may be integrated with the logic of one or more controllers of the system control module 1010 on the same mold to form a system on chip (SoC).
  • the system 1000 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.).
  • the system 1000 may have more or fewer components and/or different architectures.
  • the system 1000 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touchscreen displays), non-volatile memory ports, multiple antennas, graphics chips, application specific integrated circuits (ASICs), and speakers.
  • this application can be implemented in software and/or a combination of software and hardware; for example, it can be implemented by an application specific integrated circuit (ASIC), a general purpose computer, or any other similar hardware device.
  • the software program of the present application may be executed by a processor to realize the steps or functions described above.
  • the software program (including related data structure) of the present application can be stored in a computer-readable recording medium, such as RAM memory, magnetic or optical drive or floppy disk and similar devices.
  • some steps or functions of the present application may be implemented by hardware, for example, as a circuit that cooperates with a processor to execute each step or function.
  • the computer program instructions in the computer-readable medium include but are not limited to source files, executable files, installation package files, etc.
  • the manner in which computer program instructions are executed by the computer includes but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program.
  • the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
  • Communication media includes media by which communication signals containing, for example, computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another system.
  • Communication media can include guided transmission media (such as cables and wires (for example, optical fiber, coaxial, etc.)) and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared waves.
  • Computer readable instructions, data structures, program modules, or other data may be embodied as, for example, a modulated data signal in a wireless medium such as a carrier wave, or in a similar mechanism such as one embodied as part of spread spectrum technology.
  • a modulated data signal refers to a signal whose one or more characteristics have been altered or set in such a way as to encode information in the signal. Modulation may be analog, digital, or a hybrid modulation technique.
  • the computer-readable storage medium may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.
  • computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memory (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); or other currently known or later-developed media capable of storing computer-readable information/data for use by a computer system.
  • an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.

Abstract

An object of this application is to provide a method for providing rescue voice prompts. If a prompt termination condition is not met, a wearable rescue device collects voice information of a rescued object at a rescue site and sends the voice information to a corresponding network device; the network device performs a speech recognition operation on the voice information, determines corresponding follow-up voice prompt information based on the corresponding speech recognition result, and returns the follow-up voice prompt information to the wearable rescue device; the wearable rescue device provides the follow-up voice prompt information to the rescued object; when at least one preset prompt termination event occurs, the wearable rescue device determines that the prompt termination condition is met. This application not only enables a continuous dialogue that reassures the rescued person, but also helps rescuers provide more targeted rescue, thereby improving rescue efficiency.

Description

一种用于提供救援语音提示的方法与设备 技术领域
本申请涉及救援领域,尤其涉及一种用于提供救援语音提示的技术。
背景技术
有时,在一些公共场合有可能发生人员需要紧急救援的情况,例如在马拉松等国际体育赛事的现场可能有运动员会发生紧急情况而需要工作人员提供快速有效的帮助。随着人们生活水平的不断提高,国际化的集会也越来越多,从而救援人员可能需要为来自不同国家或地区、讲不同语言的人群提供紧急救援,相应地救援人员需要具备更丰富的救援经验或者经过更严格的训练方能胜任救援工作。
发明内容
本申请的一个目的是提供一种用于提供救援语音提示的方法。
根据本申请的一个方面,提供了一种在可穿戴救援设备端用于提供救援语音提示的方法,所述方法包括以下步骤:
若提示终止条件未被满足,采集救援现场的被救援对象的语音信息,并向对应的网络设备发送所述语音信息;
接收所述网络设备基于所述语音信息所返回的后续语音提示信息;
向所述被救援对象提供所述后续语音提示信息;
当至少一个预设的提示终止事件发生时,判定所述提示终止条件被满足。
根据本申请的另一方面,提供了一种在网络设备端用于提供救援语音提示的方法,所述方法包括以下步骤:
接收救援现场的被救援对象的语音信息,其中所述语音信息是由对应的可穿戴救援设备采集并发送的;
对所述语音信息执行语音识别操作,并基于对应的语音识别结果确定相应的后续语音提示信息;
向所述可穿戴救援设备返回所述后续语音提示信息;
其中所述后续语音提示信息用于供所述可穿戴救援设备提供至所述被救援对 象。
根据本申请的一个方面,提供了一种用于提供救援语音提示的可穿戴救援设备,所述可穿戴救援设备包括:
第一一模块,用于若提示终止条件未被满足,采集救援现场的被救援对象的语音信息,并向对应的网络设备发送所述语音信息;
第一二模块,用于接收所述网络设备基于所述语音信息所返回的后续语音提示信息;
第一三模块,用于向所述被救援对象提供所述后续语音提示信息;
第一四模块,用于当至少一个预设的提示终止事件发生时,判定所述提示终止条件被满足。
根据本申请的另一方面,提供了一种用于提供救援语音提示的网络设备,所述网络设备包括:
第二一模块,用于接收救援现场的被救援对象的语音信息,其中所述语音信息是由对应的可穿戴救援设备采集并发送的;
第二二模块,用于对所述语音信息执行语音识别操作,并基于对应的语音识别结果确定相应的后续语音提示信息;
第二三模块,用于所述可穿戴救援设备返回所述后续语音提示信息;
其中所述后续语音提示信息用于供所述可穿戴救援设备提供至所述被救援对象。
根据本申请的一个方面,提供了一种用于提供救援语音提示的方法,所述方法包括以下步骤:
若提示终止条件未被满足,可穿戴救援设备采集救援现场的被救援对象的语音信息,并向对应的网络设备发送所述语音信息;
所述网络设备接收救援现场的被救援对象的语音信息,对所述语音信息执行语音识别操作,基于对应的语音识别结果确定相应的后续语音提示信息,并向所述可穿戴救援设备返回所述后续语音提示信息;
所述可穿戴救援设备接收所述网络设备基于所述语音信息所返回的后续语音提示信息,并向所述被救援对象提供所述后续语音提示信息;
当至少一个预设的提示终止事件发生时,所述可穿戴救援设备判定所述提示终止条件被满足。
根据本申请的一个方面,提供了一种用于提供救援语音提示的设备,其中,该设备包括:
处理器;以及
被安排成存储计算机可执行指令的存储器,所述可执行指令在被执行时使所述处理器执行以上任一项所述方法的操作。
根据本申请的另一方面,提供了一种存储指令的计算机可读介质,所述指令在被执行时使得系统执行以上任一项所述方法的操作。
与现有技术相比,本申请通过救援者所佩戴的可穿戴救援设备为被救援者提供与其语言相匹配的救援提示,并能基于被救援者的反馈提供进一步的提示或询问内容,不仅能实现连续的对话以安抚被救援者,也能协助救援者提供更有针对性的救援以提升救援效率。
附图说明
通过阅读参照以下附图所作的对非限制性实施例所作的详细描述,本申请的其它特征、目的和优点将会变得更明显:
图1示出根据本申请一实施例的用于提供救援语音提示的系统结构;
图2示出根据本申请另一实施例的用于提供救援语音提示的方法流程;
图3示出根据本申请一实施例的在可穿戴救援设备端用于提供救援语音提示的方法流程;
图4示出根据本申请另一实施例的在网络设备端用于提供救援语音提示的方法流程;
图5示出根据本申请一实施例的可穿戴救援设备的功能模块;
图6示出根据本申请另一实施例的网络设备的功能模块;
图7示出可用于本申请各实施例的一种示例性系统的功能模块。
附图中相同或相似的附图标记代表相同或相似的部件。
具体实施方式
下面结合附图对本申请作进一步详细描述。
在本申请一个典型的配置中,终端、服务网络的设备和可信方均包括一个或多个处理器(例如,中央处理器(Central Processing Unit,CPU))、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(Random Access Memory,RAM)和/或非易失性内存等形式,如只读存储器(Read Only Memory,ROM)或闪存(Flash Memory)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(Phase-Change Memory,PCM)、可编程随机存取存储器(Programmable Random Access Memory,PRAM)、静态随机存取存储器(Static Random-Access Memory,SRAM)、动态随机存取存储器(Dynamic Random Access Memory,DRAM)、其他类型的随机存取存储器(Random Access Memory,RAM)、只读存储器(Read-Only Memory,ROM)、电可擦除可编程只读存储器(Electrically-Erasable Programmable Read-Only Memory,EEPROM)、快闪记忆体(Flash Memory)或其他内存技术、只读光盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能光盘(Digital Versatile Disc,DVD)或其他光学存储、磁盒式磁带,磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
本申请所指设备包括但不限于用户设备、网络设备、或用户设备与网络设备通过网络相集成所构成的设备。所述用户设备包括但不限于任何一种可与用户进行人机交互(例如通过触摸板进行人机交互)的移动电子产品,例如智能手机、平板电脑等,所述移动电子产品可以采用任意操作系统,如Android操作系统、iOS操作系统等。其中,所述网络设备包括一种能够按照事先设定或存储的指令,自动进行数值计算和信息处理的电子设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑器件(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、数字信号处理器(Digital Signal Processor,DSP)、嵌入式设备等。所述网络设备包括但不限于计算机、网络主机、单个网络服务器、多个网络服务器集或多个服务器构成的云;在此,云由基于云计算(Cloud Computing)的大量计算机或网络服务器构成,其中,云计算是分布式计算的一种,由一群松散耦合的计算机集组成的一个虚拟超级计算机。所述网络包括但不限于互联网、广域网、城域网、局域网、VPN网络、无线自组织网络(Ad Hoc Network)等。优选地,所述设备还可以是运行于所述用户设备、网络设备、或用户设备与网络设备、网络设备、触摸终端或网络设备与触摸终端通过网络相集成所构成的设备上的程序。
当然,本领域技术人员应能理解上述设备仅为举例,其他现有的或今后可能出现的设备如可适用于本申请,也应包含在本申请保护范围以内,并在此以引用方式包含于此。
在本申请的描述中,“多个”的含义是两个或者更多,除非另有明确具体的限定。
本申请中所指的可穿戴救援设备为一种硬件设备,可为包括但不限于智能头 盔、智能眼镜或其他硬件设备,其可直接穿戴于用户身上或整合于用户的衣服或其他配件中,并可选地具备与其他设备(例如,包括但不限于同一用户的其他用户设备,或其他用户的用户设备、云端服务器等网络设备,等)进行通信的能力。在一些实施例中,以上所述的其他硬件设备,包括但不限于救援人员手持的、或者通过绑带、支架等固定于救援人员的上臂、躯干或其他位置的用户设备,该用户设备包括但不限于手机、平板电脑、PDA等。事实上,本申请的各实施例基于上述可穿戴救援设备实施,但本申请并不限于此。本领域技术人员应能理解,具有通信模块并能够与其他设备(例如云端服务器等网络设备)通信的智能电子通信设备均可用于实施本申请,在此以引用的方式包含于此。
根据本申请的一个方面,提供了一种用于提供救援语音提示的方法,在一些实施例中该方法是基于图1所示出的系统实施的,其中可穿戴救援设备由相应的救援人员佩戴并通过有线/无线通信连接于网络设备通信,并可用于向被救援人员提供语音提示信息,例如在一些实施例中该可穿戴救援设备具有用于播放音频的扬声器。参考图2,该方法包括以下步骤:
若提示终止条件未被满足,可穿戴救援设备采集救援现场的被救援对象的语音信息,并向对应的网络设备发送所述语音信息;
所述网络设备接收救援现场的被救援对象的语音信息,对所述语音信息执行语音识别操作,基于对应的语音识别结果确定相应的后续语音提示信息,并向所述可穿戴救援设备返回所述后续语音提示信息;
所述可穿戴救援设备接收所述网络设备基于所述语音信息所返回的后续语音提示信息,并向所述被救援对象提供所述后续语音提示信息;
当至少一个预设的提示终止事件发生时,所述可穿戴救援设备判定所述提示终止条件被满足。
以下分别从可穿戴救援设备与网络设备两方面阐述上述方法的具体实施方式。
根据本申请的一个方面,提供了一种在可穿戴救援设备端用于提供救援语音提示的方法。参考图3,该方法包括步骤S110、步骤S120、步骤S130和步骤S140。
其中,在步骤S110中,可穿戴救援设备若提示终止条件未被满足,采集救援现场的被救援对象的语音信息(例如,基于所述可穿戴救援设备内置的麦克风采集该语音信息),并向对应的网络设备发送所述语音信息。在一些实施例中,若所述提示终止条件被满足,所述可穿戴救援设备将停止采集被救援对象的语音信息(或周围的环境音),并停止向被救援对象提供连续的语音提示。例如,在被救援对象所面临的紧急状况已经得到合理处置或者具体处置方式已经最终确定的情况下,或 者在更高级的救援人员(例如具有更强的救援能力的医疗小组)已到现场的情况下,可穿戴救援设备将终止向被救援对象提供的语音提示,以免在进一步的救援过程中对被救援对象或者相关的救援人员造成干扰。
在步骤S120中,可穿戴救援设备接收所述网络设备基于所述语音信息所返回的后续语音提示信息。在一些实施例中,该后续语音提示信息是由人工实时发出的或由人工事先录制的,而在另一些实施例中该后续语音提示信息则是由计算设备(例如服务器等网络设备)合成的。其中,在一些实施例中该后续语音提示信息是基于被救援者(或者为其提供救援的救援人员)提供至网络设备的信息(例如通过语音输入的方式提供)确定的,例如该后续语音提示信息在被救援者或者救援人员基于前一语音提示提供反馈之后确定的,并且系统可基于被救援者或者救援人员对该后续语音提示信息所提供的反馈确定并提供下一后续语音提示信息,直至上述提示终止条件被满足。
在步骤S130中,可穿戴救援设备向所述被救援对象提供所述后续语音提示信息,例如所述可穿戴救援设备在一些实施例中通过所述可穿戴救援设备的输出单元(例如扬声器单元)输出所述后续语音提示信息。
在步骤S140中,可穿戴救援设备当至少一个预设的提示终止事件发生时,判定所述提示终止条件被满足,以结束连续提供至被救援者的语音提示。其中,所述提示终止事件用于终止连续的语音提示,例如该提示终止事件包括但不限于佩戴可穿戴救援设备的救援用户主动终止语音提示,或者系统检测到相应的终止条件已被满足。
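The wearable-side flow described above (steps S110 to S140: while the termination condition is not met, collect the rescued person's speech, send it to the network device, play the returned follow-up prompt, and stop once a preset termination event occurs) can be illustrated by the following minimal Python sketch. It is only a sketch of the described loop, not an implementation disclosed by this application; all class, method, and field names (WearablePromptLoop, record_utterance, send_voice, reply.terminate, and so on) are hypothetical.

```python
# Minimal sketch of the wearable-side prompt loop (steps S110-S140); names are hypothetical.
import queue


class WearablePromptLoop:
    def __init__(self, mic, speaker, network_client):
        self.mic = mic                 # captures the rescued person's speech
        self.speaker = speaker         # plays voice prompts to the rescued person
        self.net = network_client      # exchanges audio/prompts with the network device
        self.termination_events = queue.Queue()  # filled by a button press or a server instruction

    def terminated(self) -> bool:
        # S140: the termination condition holds once any preset termination event has occurred.
        return not self.termination_events.empty()

    def run(self, initial_prompt: bytes) -> None:
        self.speaker.play(initial_prompt)        # optional S150: initial prompt to reassure or ask
        while not self.terminated():             # S110: only while the condition is not met
            voice = self.mic.record_utterance()  # collect the rescued person's speech
            reply = self.net.send_voice(voice)   # upload it and wait for the follow-up prompt
            if reply.terminate:                  # server-side prompt termination instruction
                self.termination_events.put("server_terminate")
                break
            self.speaker.play(reply.prompt_audio)  # S130: play the follow-up voice prompt
```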
在一些实施例中,上述方法是基于决策树实现的,例如该决策树的每个节点均对应于一语音提示信息,且系统基于被救援者(或者救援人员)的反馈信息确定该节点对应的分支,从而确定该节点的后续节点,并输出后续节点对应的语音提示信息。
在一些实施例中,上述方法在步骤S110之前还包括步骤S150(未示出)。在步骤S150中,可穿戴救援设备向救援现场的被救援对象提供初始语音提示信息。例如,该初始语音提示信息用于在救援人员到达救援现场后安抚被救援者,或者通过“提问”的方式获得系统的初始输入(例如询问被救援者的状况),从而提高被救援者的语音输入与系统所需要的输入之间的匹配程度,从而提升救援效率。
在一些实施例中,上述方法还包括步骤S160(未示出)和步骤S170(未示出)。在步骤S160中,可穿戴救援设备接收所述网络设备所发送的救援提示信息,例如在提供至被救援者的语音提示以外,系统还可为志愿者额外提供救援提示,以指导 其按标准流程判断并执行救援操作,从而提升救援者的救援效率,例如经验并不丰富的救援者也能在系统的提示下执行正确的操作;而在步骤S170中,可穿戴救援设备向所述可穿戴救援设备的救援用户提供所述救援提示信息。该救援提示信息在一些实施例中亦以语音输出的方式被提供至相应的救援人员。
其中在一些实施例中,在上述步骤S130中,可穿戴救援设备通过所述可穿戴救援设备的第一音频输出单元向所述被救援对象提供所述后续语音提示信息;而在上述步骤S170中,可穿戴救援设备通过所述可穿戴救援设备的第二音频输出单元向所述可穿戴救援设备的救援用户提供所述救援提示信息。其中所述第一音频输出单元和所述第二音频输出单元相互独立,以分别向被救援者和救援人员分别提供其所需的信息且避免提示信息的相互干扰,从而进一步提升救援效率。例如,所述第一音频输出单元为扬声器(或称为“外放”),而第二音频输出单元为耳塞。
在一些实施例中,上述方法还包括步骤S180(未示出)。在步骤S180中,可穿戴救援设备基于所述可穿戴救援设备的救援用户的信息补充指令,采集所述救援用户的补充语音信息,并向对应的网络设备发送所述补充语音信息。例如该信息补充指令由救援人员通过按下相关按钮或发出相应的语音指令的方式生成,用于供救援人员及时地对现场状况进行补充描述,从而使网络设备所获取的输入信息更精确,进而提升语音提示信息的准确性和救援效率。随后在步骤S120中,可穿戴救援设备接收所述网络设备基于所述语音信息以及所述补充语音信息所返回的后续语音提示信息。
在一些实施例中,为减少不必要的干扰、尽可能采集来自被救援者的有用信息以提升救援效率,在步骤S110中,若提示终止条件未被满足,可穿戴救援设备响应于所述可穿戴救援设备的救援用户的语音采集指令,采集救援现场的被救援对象的语音信息,并向对应的网络设备发送所述语音信息。在此,在一些实施例中,所述语音采集指令由救援人员通过按压物理按键或者发出语音指令的方式发出,并由可穿戴救援设备执行。
在一些实施例中,上述方法还包括步骤S190(未示出)。在步骤S190中,可穿戴救援设备获取救援现场的被救援对象的身份识别信息,并向对应的网络设备发送所述身份识别信息,其中所述身份识别信息用于供对应的网络设备识别所述被救援对象的身份信息;随后在步骤S120中,可穿戴救援设备接收所述网络设备基于所述语音信息以及所述身份识别信息所返回的后续语音提示信息。其中,所述身份识别信息为用于面部识别的照片、用于声纹识别的语音、用于身份识别的指纹/血样/虹膜信息(可由外接设备采集)等,用于识别被救援对象的身份信息,在一些实施 例中系统根据这些身份识别信息确定被救援对象的身份信息后可进一步确定被救援对象的救援辅助信息,包括但不限于被救援对象的语种信息(用于基于该语种信息提供相应语种的提示信息或者提供翻译)、过敏信息(用于确定被救援对象的过敏状况,避免引入过敏原)、紧急联系人信息(用于联系相应的紧急联系人以提供必要的协助)等,以确保被救援对象的安全和提升救援效率。
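As an illustration of how the identity information resolved from the identification data mentioned above (a photo for face recognition, a voiceprint, a fingerprint, and so on) could be mapped to rescue-assistance information (language, allergies, emergency contact), a minimal sketch follows; the registry contents and all names are hypothetical assumptions, not part of the original disclosure.

```python
# Hypothetical mapping from a resolved identity to rescue-assistance information.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AssistanceInfo:
    language: str                                        # used to pick the prompt language or provide translation
    allergies: list[str] = field(default_factory=list)   # used to avoid introducing allergens
    emergency_contact: Optional[str] = None               # contacted to provide necessary help


# Illustrative registry keyed by a resolved identity.
REGISTRY = {
    "runner-0421": AssistanceInfo("ja", ["penicillin"], "+81-90-0000-0000"),
}


def assistance_for(identity: str) -> AssistanceInfo:
    # Fall back to a default profile when the person is not registered.
    return REGISTRY.get(identity, AssistanceInfo(language="en"))
```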
在一些实施例中,以上所述的提示终止事件包括:
-所述可穿戴救援设备接收到所述网络设备所发送的提示终止指令,例如在被救援对象所面临的紧急状况已经得到合理处置或者具体处置方式已经最终确定的情况下,网络设备生成并向可穿戴救援设备发送该提示终止指令;和/或
-所述可穿戴救援设备检测到救援用户的提示终止操作,例如救援人员按下可穿戴救援设备(例如智能头盔)上的提示终止按钮以终止提供语音提示。
其中在一些实施例中,语音提示是在更高级的救援单位到达现场并接手后终止的。为了让接手的救援人员迅速掌握现场状况,可穿戴救援设备可提交从紧急事件发生开始至当前时刻的事件报告,该事件报告可通过包括但不限于蓝牙、近场通信、电子邮件等方式传输至更高级救援单位对应的计算设备。相应地,在步骤S140中,可穿戴救援设备当所述可穿戴救援设备检测到救援用户的提示终止操作时,判定所述提示终止条件被满足,并向对应的更高级救援单位提交救援活动报告,其中所述救援活动报告包括所述被救援对象的历史语音信息以及所述网络设备的历史提示信息。例如,可穿戴救援设备提交其在此期间所采集到的信息以及向被救援对象所提供的信息。在一些实施例中,上述救援活动报告可由系统自动生成,并通过其他操作(例如用户按压物理按键、给出语音指令等)而提交至更高级救援单位。在一些实施例中,为减少上述可穿戴救援设备的传输压力,或者通过在上述更高级的救援单位前往现场的途中即发送事件报告的途径使该更高级的救援单位提前掌握现场状况以提升救援效率,在上述向对应的更高级救援单位提交救援活动报告的步骤中,可穿戴救援设备向所述网络设备发送救援报告提交请求,所述救援报告提交请求用于供所述网络设备向对应的更高级救援单位发送救援活动报告。
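The rescue activity report described above (the rescued person's historical voice information together with the network device's historical prompt information, handed over to the higher-level rescue unit) could be represented by a simple structure such as the following sketch; all field names are illustrative assumptions.

```python
# Sketch of the rescue activity report handed over to a higher-level rescue unit.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class RescueActivityReport:
    incident_started_at: datetime
    rescued_utterances: list[str] = field(default_factory=list)  # historical voice information (transcribed)
    issued_prompts: list[str] = field(default_factory=list)      # historical prompt information from the network device

    def as_text(self) -> str:
        lines = [f"Incident start: {self.incident_started_at.isoformat()}"]
        lines += [f"[rescued] {u}" for u in self.rescued_utterances]
        lines += [f"[prompt] {p}" for p in self.issued_prompts]
        return "\n".join(lines)
```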
与以上所述的在可穿戴救援设备端用于提供救援语音提示的方法相对应地,根据本申请的另一方面,还提供了一种在网络设备端用于提供救援语音提示的方法。参考图4,该方法包括步骤S210、步骤S220和步骤S230。
其中,在步骤S210中,网络设备接收救援现场的被救援对象的语音信息,其中所述语音信息是由对应的可穿戴救援设备采集并发送的,例如可穿戴救援设备基于其内置的麦克风采集该语音信息。
在步骤S220中,网络设备对所述语音信息执行语音识别操作,并基于对应的语音识别结果确定相应的后续语音提示信息。其中所述后续语音提示信息用于供所述可穿戴救援设备提供至所述被救援对象。在一些实施例中,网络设备首先提供一条语音提示信息至可穿戴救援设备,以供可穿戴救援设备播放,并基于被救援对象所提供的语音反馈进行语音识别,确定被救援对象所反馈的内容,再基于该反馈内容确定后一条语音提示信息,并将该后一条语音提示信息提供至可穿戴救援设备以供其播放。
在步骤S230中,网络设备向所述可穿戴救援设备返回所述后续语音提示信息。
其中在一些实施例中网络设备基于一决策树确定一条语音提示信息的下一语音提示信息,从而能够为被救援者提供连续的语音提示,以安抚被救援者,且能逐步且精确地确定被救援者所需采取的最终的处置手段。例如该决策树的每个节点均对应于一条语音提示信息,且每个节点的后续节点(如有)是基于被救援对象所反馈的信息确定的。相应地,在一些实施例中,上述步骤S220包括子步骤S221(未示出)、子步骤S222(未示出)和子步骤S223(未示出)。其中在子步骤S221中,网络设备对所述语音信息执行语音识别操作,以确定对应的语音识别结果;在子步骤S222中,网络设备将所述语音识别结果应用于救援决策树的当前节点,以确定所述救援决策树的相应的后续节点;在子步骤S223中,网络设备基于所述后续节点确定相应的后续语音提示信息。
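A minimal sketch of the server-side sub-steps S221 to S223 described above (recognize the uploaded speech, apply the recognition result to the current node of the rescue decision tree, and derive the follow-up prompt from the follow-up node) is given below. Keyword matching stands in for whatever branch-selection logic the system actually applies, and all names are hypothetical.

```python
# Sketch of one turn of the rescue decision tree on the network device (S221-S223).
from dataclasses import dataclass, field


@dataclass
class Node:
    prompt: str                     # voice prompt associated with this node
    rescuer_hint: str = ""          # optional extra rescue prompt for the rescuer
    branches: dict[str, "Node"] = field(default_factory=dict)  # keyword -> follow-up node

    def is_leaf(self) -> bool:
        return not self.branches


def next_prompt(current: Node, recognized_text: str) -> tuple[Node, str, bool]:
    # S222: pick the branch whose keyword appears in the recognition result;
    # stay on the current node (and ask again) if nothing matches.
    for keyword, child in current.branches.items():
        if keyword in recognized_text:
            # S223: the follow-up prompt comes from the follow-up node; a leaf node
            # means the final handling has been determined and prompting can stop.
            return child, child.prompt, child.is_leaf()
    return current, current.prompt, False
```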
其中,在一些实施例中,在上述子步骤S223中,网络设备基于所述后续节点确定相应的后续语音提示信息以及救援提示信息,其中所述救援提示信息用于供所述可穿戴救援设备提供至所述可穿戴救援设备的救援用户。例如在提供至被救援者的语音提示以外,系统还可为志愿者额外提供救援提示,以指导其执行救援操作,从而提升救援者的救援效率,例如经验并不丰富的救援者也能在系统的提示下执行正确的操作。
在一些实施例中,救援人员对现场的补充描述作为系统的输入,可进一步提升系统所提供的输出的精确性,例如救援人员可及时地对现场状况进行补充描述,从而使网络设备所获取的输入信息更精确,进而提升语音提示信息的准确性和救援效率。相应地,上述方法还包括步骤S240(未示出)。在步骤S240中,网络设备接收所述可穿戴救援设备所发送的救援用户的补充语音信息,其中所述补充语音信息是由所述可穿戴救援设备采集的;上述步骤S220还包括子步骤S224(未示出),在该子步骤S224中网络设备对所述补充语音信息执行语音识别操作,以确定对应的补充语音识别结果。随后在子步骤S222中,网络设备将所述语音识别结果以及 所述补充语音识别结果应用于救援决策树的当前节点,以确定所述救援决策树的相应的后续节点。例如,网络设备融合所述语音识别结果以及所述补充语音识别结果中的关键词,并将融合的结果(例如关键词集合的并集)作为输入应用于上述救援决策树的当前节点。
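The fusion described above (taking the union of the keyword sets extracted from the rescued person's speech and from the rescuer's supplementary speech before applying them to the current node) can be sketched as follows; extract_keywords is a hypothetical stand-in for the system's keyword spotter.

```python
# Sketch of fusing the two recognition results as a union of keyword sets.
def extract_keywords(text: str, vocabulary: set[str]) -> set[str]:
    # Hypothetical keyword spotter: keep vocabulary words that occur in the text.
    return {word for word in vocabulary if word in text}


def fused_keywords(rescued_text: str, supplement_text: str, vocabulary: set[str]) -> set[str]:
    return extract_keywords(rescued_text, vocabulary) | extract_keywords(supplement_text, vocabulary)
```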
其中若上述救援决策树的当前节点为“叶子”节点,即系统已确定最终的救援提示,即可结束与被救援者的连续对话,以便现场救援人员集中精力施救。相应地,在一些实施例中,上述方法还包括步骤S250(未示出)。在步骤S250中,若所述后续节点为所述救援决策树的叶子节点,网络设备向所述可穿戴救援设备发送提示终止指令,所述提示终止指令用于供所述可穿戴救援设备终止提供救援语音提示。在一些实施例中,上述“叶子”节点可包括一编辑项,该编辑项为高级别救援单位到达现场时是否终止提供救援提示的终止条件的编辑项。
在一些实施例中,上述方法还包括步骤S260(未示出)。在步骤S260中,网络设备接收对应的可穿戴救援设备所发送的身份识别信息,并基于所述身份识别信息识别救援现场的被救援对象的身份信息。随后在步骤S220中,网络设备基于所述身份信息,对所述语音信息执行语音识别操作,并基于对应的语音识别结果以及所述身份信息确定相应的后续语音提示信息,例如在识别出被救援者的身份之后确定其语种信息,以供后续提供相应语种的语音提示。所述身份识别信息用于供对应的网络设备识别所述被救援对象的身份信息。其中,所述身份识别信息为用于面部识别的照片、用于声纹识别的语音、用于身份识别的指纹/血样信息(可由外接设备采集)等,用于识别被救援对象的身份信息,在一些实施例中系统根据这些身份识别信息确定被救援对象的身份信息后可进一步确定被救援对象的救援辅助信息,包括但不限于被救援对象的语种信息(用于基于该语种信息提供相应语种的提示信息或者提供翻译)、过敏信息(用于确定被救援对象的过敏状况,避免引入过敏原)、紧急联系人信息(用于联系相应的紧急联系人以提供必要的协助)等,以确保被救援对象的安全和提升救援效率。
其中在一些实施例中,语音提示是在更高级的救援单位到达现场并接手后终止的。为了让接手的救援人员迅速掌握现场状况,可穿戴救援设备可提交从紧急事件发生开始至当前时刻的事件报告,该时间报告可通过包括但不限于蓝牙、近场通信、电子邮件等方式传输至更高级救援单位对应的计算设备。为减少上述可穿戴救援设备的传输压力,或者通过在上述更高级的救援单位前往现场的途中即发送事件报告的途径使该更高级的救援单位提前掌握现场状况以提升救援效率,上述事件报告可由网络设备发送至更高级救援单位。相应地,在一些实施例中,上述方法还包括步 骤S270(未示出)。在步骤S270中,网络设备基于所述可穿戴救援设备所发送的救援报告提交请求,向对应的更高级救援单位发送救援活动报告,其中所述救援活动报告包括所述被救援对象的历史语音信息以及所述网络设备的历史提示信息。
根据本申请的一个方面,提供了一种用于提供救援语音提示的可穿戴救援设备100。参考图5,该可穿戴救援设备100包括第一一模块110、第一二模块120、第一三模块130和第一四模块140。
其中,第一一模块110若提示终止条件未被满足,采集救援现场的被救援对象的语音信息(例如,基于所述可穿戴救援设备内置的麦克风采集该语音信息),并向对应的网络设备发送所述语音信息。在一些实施例中,若所述提示终止条件被满足,所述可穿戴救援设备将停止采集被救援对象的语音信息(或周围的环境音),并停止向被救援对象提供连续的语音提示。例如,在被救援对象所面临的紧急状况已经得到合理处置或者具体处置方式已经最终确定的情况下,或者在更高级的救援人员(例如具有更强的救援能力的医疗小组)已到现场的情况下,可穿戴救援设备将终止向被救援对象提供的语音提示,以免在进一步的救援过程中对被救援对象或者相关的救援人员造成干扰。
第一二模块120接收所述网络设备基于所述语音信息所返回的后续语音提示信息。在一些实施例中,该后续语音提示信息是由人工实时发出的或由人工事先录制的,而在另一些实施例中该后续语音提示信息则是由计算设备(例如服务器等网络设备)合成的。其中,在一些实施例中该后续语音提示信息是基于被救援者(或者为其提供救援的救援人员)提供至网络设备的信息(例如通过语音输入的方式提供)确定的,例如该后续语音提示信息在被救援者或者救援人员基于前一语音提示提供反馈之后确定的,并且系统可基于被救援者或者救援人员对该后续语音提示信息所提供的反馈确定并提供下一后续语音提示信息,直至上述提示终止条件被满足。
第一三模块130向所述被救援对象提供所述后续语音提示信息,例如所述可穿戴救援设备在一些实施例中通过所述可穿戴救援设备的输出单元(例如扬声器单元)输出所述后续语音提示信息。
第一四模块140当至少一个预设的提示终止事件发生时,判定所述提示终止条件被满足,以结束连续提供至被救援者的语音提示。其中,所述提示终止事件用于终止连续的语音提示,例如该提示终止事件包括但不限于佩戴可穿戴救援设备的救援用户主动终止语音提示,或者系统检测到相应的终止条件已被满足。
在一些实施例中,上述系统是基于决策树实现的,例如该决策树的每个节点均 对应于一语音提示信息,且系统基于被救援者(或者救援人员)的反馈信息确定该节点对应的分支,从而确定该节点的后续节点,并输出后续节点对应的语音提示信息。
在一些实施例中,上述第一一模块110之前还包括第一五模块150(未示出)。第一五模块150向救援现场的被救援对象提供初始语音提示信息。例如,该初始语音提示信息用于在救援人员到达救援现场后安抚被救援者,或者通过“提问”的方式获得系统的初始输入(例如询问被救援者的状况),从而提高被救援者的语音输入与系统所需要的输入之间的匹配程度,从而提升救援效率。
在一些实施例中,上述可穿戴救援设备还包括第一六模块160(未示出)和第一七模块170(未示出)。第一六模块160接收所述网络设备所发送的救援提示信息,例如在提供至被救援者的语音提示以外,系统还可为志愿者额外提供救援提示,以指导其按标准流程判断并执行救援操作,从而提升救援者的救援效率,例如经验并不丰富的救援者也能在系统的提示下执行正确的操作;第一七模块170向所述可穿戴救援设备的救援用户提供所述救援提示信息。该救援提示信息在一些实施例中亦以语音输出的方式被提供至相应的救援人员。
其中在一些实施例中,上述第一三模块130通过所述可穿戴救援设备的第一音频输出单元向所述被救援对象提供所述后续语音提示信息;而第一七模块170通过所述可穿戴救援设备的第二音频输出单元向所述可穿戴救援设备的救援用户提供所述救援提示信息。其中所述第一音频输出单元和所述第二音频输出单元相互独立,以分别向被救援者和救援人员分别提供其所需的信息且避免提示信息的相互干扰,从而进一步提升救援效率。例如,所述第一音频输出单元为扬声器(或称为“外放”),而第二音频输出单元为耳塞。
在一些实施例中,上述可穿戴救援设备还包括第一八模块180(未示出)。第一八模块180基于所述可穿戴救援设备的救援用户的信息补充指令,采集所述救援用户的补充语音信息,并向对应的网络设备发送所述补充语音信息。例如该信息补充指令由救援人员通过按下相关按钮或发出相应的语音指令的方式生成,用于供救援人员及时地对现场状况进行补充描述,从而使网络设备所获取的输入信息更精确,进而提升语音提示信息的准确性和救援效率。随后第一二模块120接收所述网络设备基于所述语音信息以及所述补充语音信息所返回的后续语音提示信息。
在一些实施例中,为减少不必要的干扰、尽可能采集来自被救援者的有用信息以提升救援效率,若提示终止条件未被满足,第一一模块110响应于所述可穿戴救援设备的救援用户的语音采集指令,采集救援现场的被救援对象的语音信息,并向 对应的网络设备发送所述语音信息。在此,在一些实施例中,所述语音采集指令由救援人员通过按压物理按键或者发出语音指令的方式发出,并由可穿戴救援设备执行。
在一些实施例中,上述可穿戴救援设备还包括第一九模块190(未示出)。第一九模块190获取救援现场的被救援对象的身份识别信息,并向对应的网络设备发送所述身份识别信息,其中所述身份识别信息用于供对应的网络设备识别所述被救援对象的身份信息;随后第一二模块120接收所述网络设备基于所述语音信息以及所述身份识别信息所返回的后续语音提示信息。其中,所述身份识别信息为用于面部识别的照片、用于声纹识别的语音、用于身份识别的指纹/血样/虹膜信息(可由外接设备采集)等,用于识别被救援对象的身份信息,在一些实施例中系统根据这些身份识别信息确定被救援对象的身份信息后可进一步确定被救援对象的救援辅助信息,包括但不限于被救援对象的语种信息(用于基于该语种信息提供相应语种的提示信息或者提供翻译)、过敏信息(用于确定被救援对象的过敏状况,避免引入过敏原)、紧急联系人信息(用于联系相应的紧急联系人以提供必要的协助)等,以确保被救援对象的安全和提升救援效率。
在一些实施例中,以上所述的提示终止事件包括:
-所述可穿戴救援设备接收到所述网络设备所发送的提示终止指令,例如在被救援对象所面临的紧急状况已经得到合理处置或者具体处置方式已经最终确定的情况下,网络设备生成并向可穿戴救援设备发送该提示终止指令;和/或
-所述可穿戴救援设备检测到救援用户的提示终止操作,例如救援人员按下可穿戴救援设备(例如智能头盔)上的提示终止按钮以终止提供语音提示。
其中在一些实施例中,语音提示是在更高级的救援单位到达现场并接手后终止的。为了让接手的救援人员迅速掌握现场状况,可穿戴救援设备可提交从紧急事件发生开始至当前时刻的事件报告,该事件报告可通过包括但不限于蓝牙、近场通信、电子邮件等方式传输至更高级救援单位对应的计算设备。相应地,第一四模块140当所述可穿戴救援设备检测到救援用户的提示终止操作时,判定所述提示终止条件被满足,并向对应的更高级救援单位提交救援活动报告,其中所述救援活动报告包括所述被救援对象的历史语音信息以及所述网络设备的历史提示信息。例如,可穿戴救援设备提交其在此期间所采集到的信息以及向被救援对象所提供的信息。在一些实施例中,上述救援活动报告可由系统自动生成,并通过其他操作(例如用户按压物理按键、给出语音指令等)而提交至更高级救援单位。在一些实施例中,为减少上述可穿戴救援设备的传输压力,或者通过在上述更高级的救援单位前往现场的 途中即发送事件报告的途径使该更高级的救援单位提前掌握现场状况以提升救援效率,在上述向对应的更高级救援单位提交救援活动报告的步骤中,可穿戴救援设备向所述网络设备发送救援报告提交请求,所述救援报告提交请求用于供所述网络设备向对应的更高级救援单位发送救援活动报告。
与以上所述的在可穿戴救援设备端用于提供救援语音提示的方法相对应地,根据本申请的另一方面,还提供了一种用于提供救援语音提示的网络设备200。参考图6,该网络设备200包括第二一模块210、第二二模块220和第二三模块230。
其中,第二一模块210接收救援现场的被救援对象的语音信息,其中所述语音信息是由对应的可穿戴救援设备采集并发送的,例如可穿戴救援设备基于其内置的麦克风采集该语音信息。
第二二模块220对所述语音信息执行语音识别操作,并基于对应的语音识别结果确定相应的后续语音提示信息。其中所述后续语音提示信息用于供所述可穿戴救援设备提供至所述被救援对象。在一些实施例中,网络设备首先提供一条语音提示信息至可穿戴救援设备,以供可穿戴救援设备播放,并基于被救援对象所提供的语音反馈进行语音识别,确定被救援对象所反馈的内容,再基于该反馈内容确定后一条语音提示信息,并将该后一条语音提示信息提供至可穿戴救援设备以供其播放。
第二三模块230向所述可穿戴救援设备返回所述后续语音提示信息。
其中在一些实施例中网络设备基于一决策树确定一条语音提示信息的下一语音提示信息,从而能够为被救援者提供连续的语音提示,以安抚被救援者,且能逐步且精确地确定被救援者所需采取的最终的处置手段。例如该决策树的每个节点均对应于一条语音提示信息,且每个节点的后续节点(如有)是基于被救援对象所反馈的信息确定的。相应地,在一些实施例中,上述第二二模块220包括第二一子模块221(未示出)、第二二子模块222(未示出)和第二三子模块223(未示出)。其中第二一子模块221对所述语音信息执行语音识别操作,以确定对应的语音识别结果;第二二子模块222将所述语音识别结果应用于救援决策树的当前节点,以确定所述救援决策树的相应的后续节点;第二三子模块223基于所述后续节点确定相应的后续语音提示信息。
其中,在一些实施例中,上述第二三子模块223基于所述后续节点确定相应的后续语音提示信息以及救援提示信息,其中所述救援提示信息用于供所述可穿戴救援设备提供至所述可穿戴救援设备的救援用户。例如在提供至被救援者的语音提示以外,系统还可为志愿者额外提供救援提示,以指导其执行救援操作,从而提升救援者的救援效率,例如经验并不丰富的救援者也能在系统的提示下执行正确的操 作。
在一些实施例中,救援人员对现场的补充描述作为系统的输入,可进一步提升系统所提供的输出的精确性,例如救援人员可及时地对现场状况进行补充描述,从而使网络设备所获取的输入信息更精确,进而提升语音提示信息的准确性和救援效率。相应地,上述网络设备还包括第二四模块240(未示出)。第二四模块240接收所述可穿戴救援设备所发送的救援用户的补充语音信息,其中所述补充语音信息是由所述可穿戴救援设备采集的;上述第二二模块220还包括第二四子模块224(未示出),该第二四子模块224对所述补充语音信息执行语音识别操作,以确定对应的补充语音识别结果。随后第二二子模块222将所述语音识别结果以及所述补充语音识别结果应用于救援决策树的当前节点,以确定所述救援决策树的相应的后续节点。例如,网络设备融合所述语音识别结果以及所述补充语音识别结果中的关键词,并将融合的结果(例如关键词集合的并集)作为输入应用于上述救援决策树的当前节点。
其中若上述救援决策树的当前节点为“叶子”节点,即系统已确定最终的救援提示,即可结束与被救援者的连续对话,以便现场救援人员集中精力施救。相应地,在一些实施例中,上述网络设备还包括第二五模块250(未示出)。若所述后续节点为所述救援决策树的叶子节点,第二五模块250向所述可穿戴救援设备发送提示终止指令,所述提示终止指令用于供所述可穿戴救援设备终止提供救援语音提示。在一些实施例中,上述“叶子”节点可包括一编辑项,该编辑项为高级别救援单位到达现场时是否终止提供救援提示的终止条件的编辑项。
在一些实施例中,上述网络设备还包括第二六模块260(未示出)。第二六模块260接收对应的可穿戴救援设备所发送的身份识别信息,并基于所述身份识别信息识别救援现场的被救援对象的身份信息。随后第二二模块220基于所述身份信息,对所述语音信息执行语音识别操作,并基于对应的语音识别结果以及所述身份信息确定相应的后续语音提示信息,例如在识别出被救援者的身份之后确定其语种信息,以供后续提供相应语种的语音提示。所述身份识别信息用于供对应的网络设备识别所述被救援对象的身份信息。其中,所述身份识别信息为用于面部识别的照片、用于声纹识别的语音、用于身份识别的指纹/血样信息(可由外接设备采集)等,用于识别被救援对象的身份信息,在一些实施例中系统根据这些身份识别信息确定被救援对象的身份信息后可进一步确定被救援对象的救援辅助信息,包括但不限于被救援对象的语种信息(用于基于该语种信息提供相应语种的提示信息或者提供翻译)、过敏信息(用于确定被救援对象的过敏状况,避免引入过敏原)、紧急联系 人信息(用于联系相应的紧急联系人以提供必要的协助)等,以确保被救援对象的安全和提升救援效率。
其中在一些实施例中,语音提示是在更高级的救援单位到达现场并接手后终止的。为了让接手的救援人员迅速掌握现场状况,可穿戴救援设备可提交从紧急事件发生开始至当前时刻的事件报告,该时间报告可通过包括但不限于蓝牙、近场通信、电子邮件等方式传输至更高级救援单位对应的计算设备。为减少上述可穿戴救援设备的传输压力,或者通过在上述更高级的救援单位前往现场的途中即发送事件报告的途径使该更高级的救援单位提前掌握现场状况以提升救援效率,上述事件报告可由网络设备发送至更高级救援单位。相应地,在一些实施例中,上述网络设备还包括第二七模块270(未示出)。第二七模块270基于所述可穿戴救援设备所发送的救援报告提交请求,向对应的更高级救援单位发送救援活动报告,其中所述救援活动报告包括所述被救援对象的历史语音信息以及所述网络设备的历史提示信息。
本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机代码,当所述计算机代码被执行时,如前任一项所述的方法被执行。
本申请还提供了一种计算机程序产品,当所述计算机程序产品被计算机设备执行时,如前任一项所述的方法被执行。
本申请还提供了一种计算机设备,所述计算机设备包括:
一个或多个处理器;
存储器,用于存储一个或多个计算机程序;
当所述一个或多个计算机程序被所述一个或多个处理器执行时,使得所述一个或多个处理器实现如前任一项所述的方法。
图7示出了可被用于实施本申请中所述的各个实施例的示例性系统。
如图7所示,在一些实施例中,系统1000能够作为各所述实施例中的任意一个可穿戴救援设备或网络设备。在一些实施例中,系统1000可包括具有指令的一个或多个计算机可读介质(例如,系统存储器或NVM/存储设备1020)以及与该一个或多个计算机可读介质耦合并被配置为执行指令以实现模块从而执行本申请中所述的动作的一个或多个处理器(例如,(一个或多个)处理器1005)。
对于一个实施例,系统控制模块1010可包括任意适当的接口控制器,以向(一个或多个)处理器1005中的至少一个和/或与系统控制模块1010通信的任意适当的设备或组件提供任意适当的接口。
系统控制模块1010可包括存储器控制器模块1030,以向系统存储器1015提供接口。存储器控制器模块1030可以是硬件模块、软件模块和/或固件模块。
系统存储器1015可被用于例如为系统1000加载和存储数据和/或指令。对于一个实施例,系统存储器1015可包括任意适当的易失性存储器,例如,适当的DRAM。在一些实施例中,系统存储器1015可包括双倍数据速率类型四同步动态随机存取存储器(DDR4SDRAM)。
对于一个实施例,系统控制模块1010可包括一个或多个输入/输出(I/O)控制器,以向NVM/存储设备1020及(一个或多个)通信接口1025提供接口。
例如,NVM/存储设备1020可被用于存储数据和/或指令。NVM/存储设备1020可包括任意适当的非易失性存储器(例如,闪存)和/或可包括任意适当的(一个或多个)非易失性存储设备(例如,一个或多个硬盘驱动器(Hard Disk,HDD)、一个或多个光盘(CD)驱动器和/或一个或多个数字通用光盘(DVD)驱动器)。
NVM/存储设备1020可包括在物理上作为系统1000被安装在其上的设备的一部分的存储资源,或者其可被该设备访问而不必作为该设备的一部分。例如,NVM/存储设备1020可通过网络经由(一个或多个)通信接口1025进行访问。
(一个或多个)通信接口1025可为系统1000提供接口以通过一个或多个网络和/或与任意其他适当的设备通信。系统1000可根据一个或多个无线网络标准和/或协议中的任意标准和/或协议来与无线网络的一个或多个组件进行无线通信。
对于一个实施例,(一个或多个)处理器1005中的至少一个可与系统控制模块1010的一个或多个控制器(例如,存储器控制器模块1030)的逻辑封装在一起。对于一个实施例,(一个或多个)处理器1005中的至少一个可与系统控制模块1010的一个或多个控制器的逻辑封装在一起以形成系统级封装(SiP)。对于一个实施例,(一个或多个)处理器1005中的至少一个可与系统控制模块1010的一个或多个控制器的逻辑集成在同一模具上。对于一个实施例,(一个或多个)处理器1005中的至少一个可与系统控制模块1010的一个或多个控制器的逻辑集成在同一模具上以形成片上系统(SoC)。
在各个实施例中,系统1000可以但不限于是:服务器、工作站、台式计算设备或移动计算设备(例如,膝上型计算设备、手持计算设备、平板电脑、上网本等)。在各个实施例中,系统1000可具有更多或更少的组件和/或不同的架构。例如,在一些实施例中,系统1000包括一个或多个摄像机、键盘、液晶显示器(LCD)屏幕(包括触屏显示器)、非易失性存储器端口、多个天线、图形芯片、专用集成电路(ASIC)和扬声器。
需要注意的是,本申请可在软件和/或软件与硬件的组合体中被实施,例如,可采用专用集成电路(ASIC)、通用目的计算机或任何其他类似硬件设备来实现。 在一个实施例中,本申请的软件程序可以通过处理器执行以实现上文所述步骤或功能。同样地,本申请的软件程序(包括相关的数据结构)可以被存储到计算机可读记录介质中,例如,RAM存储器,磁或光驱动器或软磁盘及类似设备。另外,本申请的一些步骤或功能可采用硬件来实现,例如,作为与处理器配合从而执行各个步骤或功能的电路。
另外,本申请的一部分可被应用为计算机程序产品,例如计算机程序指令,当其被计算机执行时,通过该计算机的操作,可以调用或提供根据本申请的方法和/或技术方案。本领域技术人员应能理解,计算机程序指令在计算机可读介质中的存在形式包括但不限于源文件、可执行文件、安装包文件等,相应地,计算机程序指令被计算机执行的方式包括但不限于:该计算机直接执行该指令,或者该计算机编译该指令后再执行对应的编译后程序,或者该计算机读取并执行该指令,或者该计算机读取并安装该指令后再执行对应的安装后程序。在此,计算机可读介质可以是可供计算机访问的任意可用的计算机可读存储介质或通信介质。
通信介质包括藉此包含例如计算机可读指令、数据结构、程序模块或其他数据的通信信号被从一个系统传送到另一系统的介质。通信介质可包括有导的传输介质(诸如电缆和线(例如,光纤、同轴等))和能传播能量波的无线(未有导的传输)介质,诸如声音、电磁、RF、微波和红外。计算机可读指令、数据结构、程序模块或其他数据可被体现为例如无线介质(诸如载波或诸如被体现为扩展频谱技术的一部分的类似机制)中的已调制数据信号。术语“已调制数据信号”指的是其一个或多个特征以在信号中编码信息的方式被更改或设定的信号。调制可以是模拟的、数字的或混合调制技术。
作为示例而非限制,计算机可读存储介质可包括以用于存储诸如计算机可读指令、数据结构、程序模块或其它数据的信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动的介质。例如,计算机可读存储介质包括,但不限于,易失性存储器,诸如随机存储器(RAM,DRAM,SRAM);以及非易失性存储器,诸如闪存、各种只读存储器(ROM,PROM,EPROM,EEPROM)、磁性和铁磁/铁电存储器(MRAM,FeRAM);以及磁性和光学存储设备(硬盘、磁带、CD、DVD);或其它现在已知的介质或今后开发的能够存储供计算机系统使用的计算机可读信息/数据。
在此,根据本申请的一个实施例包括一个装置,该装置包括用于存储计算机程序指令的存储器和用于执行程序指令的处理器,其中,当该计算机程序指令被该处理器执行时,触发该装置运行基于前述根据本申请的多个实施例的方法和/或技术方案。
对于本领域技术人员而言,显然本申请不限于上述示范性实施例的细节,而且在不背离本申请的精神或基本特征的情况下,能够以其他的具体形式实现本申请。因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本申请的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附图标记视为限制所涉及的权利要求。此外,显然“包括”一词不排除其他单元或步骤,单数不排除复数。装置权利要求中陈述的多个单元或装置也可以由一个单元或装置通过软件或者硬件来实现。第一,第二等词语用来表示名称,而并不表示任何特定的顺序。

Claims (20)

  1. A method for providing rescue voice prompts at a wearable rescue device, wherein the method comprises:
    if a prompt termination condition is not met, collecting voice information of a rescued object at a rescue site, and sending the voice information to a corresponding network device;
    receiving follow-up voice prompt information returned by the network device based on the voice information;
    providing the follow-up voice prompt information to the rescued object;
    when at least one preset prompt termination event occurs, determining that the prompt termination condition is met.
  2. The method according to claim 1, wherein before collecting the voice information of the rescued object at the rescue site and sending the voice information to the corresponding network device, the method further comprises:
    providing initial voice prompt information to the rescued object at the rescue site.
  3. The method according to claim 1, wherein the method further comprises:
    receiving rescue prompt information sent by the network device;
    providing the rescue prompt information to a rescue user of the wearable rescue device.
  4. The method according to claim 3, wherein the providing the follow-up voice prompt information to the rescued object comprises:
    providing the follow-up voice prompt information to the rescued object through a first audio output unit of the wearable rescue device;
    and the providing the rescue prompt information to the rescue user of the wearable rescue device comprises:
    providing the rescue prompt information to the rescue user of the wearable rescue device through a second audio output unit of the wearable rescue device.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises:
    based on an information supplement instruction of the rescue user of the wearable rescue device, collecting supplementary voice information of the rescue user, and sending the supplementary voice information to the corresponding network device;
    and the receiving the follow-up voice prompt information returned by the network device based on the voice information comprises:
    receiving the follow-up voice prompt information returned by the network device based on the voice information and the supplementary voice information.
  6. The method according to claim 1, wherein the step of, if the prompt termination condition is not met, collecting voice information of the rescued object at the rescue site and sending the voice information to the corresponding network device comprises:
    if the prompt termination condition is not met, in response to a voice collection instruction of the rescue user of the wearable rescue device, collecting the voice information of the rescued object at the rescue site, and sending the voice information to the corresponding network device.
  7. The method according to claim 1, wherein the method further comprises:
    acquiring identity recognition information of the rescued object at the rescue site, and sending the identity recognition information to the corresponding network device, wherein the identity recognition information is used by the corresponding network device to identify identity information of the rescued object;
    and the receiving the follow-up voice prompt information returned by the network device based on the voice information comprises:
    receiving the follow-up voice prompt information returned by the network device based on the voice information and the identity recognition information.
  8. The method according to claim 1, wherein the prompt termination event comprises at least any one of the following:
    the wearable rescue device receives a prompt termination instruction sent by the network device;
    the wearable rescue device detects a prompt termination operation of the rescue user.
  9. The method according to claim 8, wherein the step of, when at least one preset prompt termination event occurs, determining that the prompt termination condition is met comprises:
    when the wearable rescue device detects the prompt termination operation of the rescue user, determining that the prompt termination condition is met, and submitting a rescue activity report to a corresponding higher-level rescue unit, wherein the rescue activity report comprises historical voice information of the rescued object and historical prompt information of the network device.
  10. The method according to claim 9, wherein the submitting the rescue activity report to the corresponding higher-level rescue unit comprises:
    sending a rescue report submission request to the network device, the rescue report submission request being used by the network device to send the rescue activity report to the corresponding higher-level rescue unit.
  11. A method for providing rescue voice prompts at a network device, wherein the method comprises:
    receiving voice information of a rescued object at a rescue site, wherein the voice information is collected and sent by a corresponding wearable rescue device;
    performing a speech recognition operation on the voice information, and determining corresponding follow-up voice prompt information based on a corresponding speech recognition result;
    returning the follow-up voice prompt information to the wearable rescue device;
    wherein the follow-up voice prompt information is used by the wearable rescue device to be provided to the rescued object.
  12. The method according to claim 11, wherein the performing a speech recognition operation on the voice information and determining corresponding follow-up voice prompt information based on a corresponding speech recognition result comprises:
    performing the speech recognition operation on the voice information to determine the corresponding speech recognition result;
    applying the speech recognition result to a current node of a rescue decision tree to determine a corresponding follow-up node of the rescue decision tree;
    determining the corresponding follow-up voice prompt information based on the follow-up node.
  13. The method according to claim 12, wherein the determining the corresponding follow-up voice prompt information based on the follow-up node comprises:
    determining the corresponding follow-up voice prompt information and rescue prompt information based on the follow-up node, wherein the rescue prompt information is used by the wearable rescue device to be provided to a rescue user of the wearable rescue device.
  14. The method according to claim 12, wherein the method further comprises:
    receiving supplementary voice information of a rescue user sent by the wearable rescue device, wherein the supplementary voice information is collected by the wearable rescue device;
    the performing a speech recognition operation on the voice information and determining corresponding follow-up voice prompt information based on a corresponding speech recognition result further comprises:
    performing a speech recognition operation on the supplementary voice information to determine a corresponding supplementary speech recognition result;
    and the applying the speech recognition result to a current node of a rescue decision tree to determine a corresponding follow-up node of the rescue decision tree comprises:
    applying the speech recognition result and the supplementary speech recognition result to the current node of the rescue decision tree to determine the corresponding follow-up node of the rescue decision tree.
  15. The method according to claim 12, wherein the method further comprises:
    if the follow-up node is a leaf node of the rescue decision tree, sending a prompt termination instruction to the wearable rescue device, the prompt termination instruction being used by the wearable rescue device to terminate providing rescue voice prompts.
  16. The method according to claim 11, wherein the method further comprises:
    receiving identity recognition information sent by the corresponding wearable rescue device;
    identifying identity information of the rescued object at the rescue site based on the identity recognition information;
    the performing a speech recognition operation on the voice information and determining corresponding follow-up voice prompt information based on a corresponding speech recognition result comprises:
    based on the identity information, performing the speech recognition operation on the voice information, and determining the corresponding follow-up voice prompt information based on the corresponding speech recognition result and the identity information.
  17. The method according to claim 11, wherein the method further comprises:
    based on a rescue report submission request sent by the wearable rescue device, sending a rescue activity report to a corresponding higher-level rescue unit, wherein the rescue activity report comprises historical voice information of the rescued object and historical prompt information of the network device.
  18. A method for providing rescue voice prompts, wherein the method comprises:
    if a prompt termination condition is not met, a wearable rescue device collects voice information of a rescued object at a rescue site and sends the voice information to a corresponding network device;
    the network device receives the voice information of the rescued object at the rescue site, performs a speech recognition operation on the voice information, determines corresponding follow-up voice prompt information based on a corresponding speech recognition result, and returns the follow-up voice prompt information to the wearable rescue device;
    the wearable rescue device receives the follow-up voice prompt information returned by the network device based on the voice information, and provides the follow-up voice prompt information to the rescued object;
    when at least one preset prompt termination event occurs, the wearable rescue device determines that the prompt termination condition is met.
  19. A device for providing rescue voice prompts, wherein the device comprises:
    a processor; and
    a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the operations of the method according to any one of claims 1 to 17.
  20. A computer-readable medium storing instructions which, when executed, cause a system to perform the operations of the method according to any one of claims 1 to 17.
PCT/CN2020/085049 2019-04-19 2020-04-16 一种用于提供救援语音提示的方法与设备 WO2020211806A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910320058.7A CN110062369A (zh) 2019-04-19 2019-04-19 一种用于提供救援语音提示的方法与设备
CN201910320058.7 2019-04-19

Publications (1)

Publication Number Publication Date
WO2020211806A1 true WO2020211806A1 (zh) 2020-10-22

Family

ID=67319799

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/085049 WO2020211806A1 (zh) 2019-04-19 2020-04-16 一种用于提供救援语音提示的方法与设备

Country Status (2)

Country Link
CN (1) CN110062369A (zh)
WO (1) WO2020211806A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110062369A (zh) * 2019-04-19 2019-07-26 上海救要救信息科技有限公司 一种用于提供救援语音提示的方法与设备
CN110535960A (zh) * 2019-08-30 2019-12-03 百度在线网络技术(北京)有限公司 预警方法、装置以及电子设备
CN112992154A (zh) * 2021-05-08 2021-06-18 北京远鉴信息技术有限公司 一种基于增强型声纹库的语音身份确定方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105915593A (zh) * 2016-04-11 2016-08-31 上海救要救信息科技有限公司 用于处理救援请求的方法与设备
CN106686048A (zh) * 2016-07-28 2017-05-17 深圳市元征科技股份有限公司 一种数据传输方法及可穿戴设备
CN106817475A (zh) * 2015-11-27 2017-06-09 单正建 一种隐蔽的基于智能终端/手机及其附属或关联设备的求救方法
WO2018055234A1 (en) * 2016-09-23 2018-03-29 Nokia Technologies Oy A method, apparatus and computer program product for providing a dynamic wake-up alert
CN110062369A (zh) * 2019-04-19 2019-07-26 上海救要救信息科技有限公司 一种用于提供救援语音提示的方法与设备

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204028658U (zh) * 2014-06-27 2014-12-17 王哲龙 一种智能救援头盔及系统
CN104391673A (zh) * 2014-11-20 2015-03-04 百度在线网络技术(北京)有限公司 语音交互方法和装置
US10311859B2 (en) * 2016-01-16 2019-06-04 Genesys Telecommunications Laboratories, Inc. Material selection for language model customization in speech recognition for speech analytics
CN107038840A (zh) * 2016-02-04 2017-08-11 中兴通讯股份有限公司 一种可穿戴设备的信息处理方法、装置及可穿戴设备
CN107359889A (zh) * 2017-06-27 2017-11-17 苏州美天网络科技有限公司 基于物联网的消防人员定位系统
CN109426900A (zh) * 2017-08-25 2019-03-05 山东万里红信息技术有限公司 基于物联网的移动应急指挥系统
CN109480377A (zh) * 2018-11-30 2019-03-19 迅捷安消防及救援科技(深圳)有限公司 消防及救援智能头盔、通话控制方法及相关产品

Also Published As

Publication number Publication date
CN110062369A (zh) 2019-07-26

Legal Events

Code Title / Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20792136; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20792136; Country of ref document: EP; Kind code of ref document: A1)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/02/2022))
122 Ep: pct application non-entry in european phase (Ref document number: 20792136; Country of ref document: EP; Kind code of ref document: A1)