WO2020182096A1 - Method and device for generating a rescue instruction - Google Patents

Method and device for generating a rescue instruction

Info

Publication number
WO2020182096A1
Authority
WO
WIPO (PCT)
Prior art keywords
rescue
information
rescued
rescued object
wearable
Application number
PCT/CN2020/078405
Other languages
English (en)
Chinese (zh)
Inventor
陆乐
俸安琪
徐春旭
丘富铨
田中秀治
喜熨斗智也
Original Assignee
上海救要救信息科技有限公司
Application filed by 上海救要救信息科技有限公司
Publication of WO2020182096A1


Classifications

    • A HUMAN NECESSITIES
    • A62 LIFE-SAVING; FIRE-FIGHTING
    • A62B DEVICES, APPARATUS OR METHODS FOR LIFE-SAVING
    • A62B99/00 Subject matter not provided for in other groups of this subclass
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • This application relates to the field of rescue, in particular to a technology for generating rescue instructions.
  • One purpose of this application is to provide a method and equipment for generating rescue instructions.
  • a method for generating rescue instructions on a wearable rescue device includes:
  • a method for generating rescue instructions on a network device side including:
  • a method for generating rescue instructions including:
  • the wearable rescue device captures a scene image of the rescue scene through the camera device installed in the wearable rescue device, and sends the scene image to the corresponding network device;
  • the network device determines the rescue data information of the rescued object at the rescue scene based on the scene image, and sends the rescue data information to the wearable rescue device;
  • the wearable rescue device receives the rescue data information, obtains rescue instructions corresponding to the rescue data information for on-site rescue of the rescued object, and provides the rescue instructions to the corresponding on-site rescue user.
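Read together, the three bullets above describe a capture, lookup, and present loop between the wearable device and the network device. The sketch below illustrates that loop under stated assumptions: the class names, the placeholder recognition result, and the hard-coded rescue data are invented for illustration and are not part of this application.

```python
from dataclasses import dataclass, field

@dataclass
class RescueData:
    identity: str
    medical_history: list = field(default_factory=list)
    allergies: list = field(default_factory=list)

class NetworkDevice:
    """Server side: turn a scene image into rescue data (second bullet)."""
    def handle_scene_image(self, image):
        identity = self._recognize(image)           # stand-in for image recognition
        return self._lookup_rescue_data(identity)   # stand-in for a records lookup
    def _recognize(self, image):
        return "athlete-042"                        # placeholder recognition result
    def _lookup_rescue_data(self, identity):
        return RescueData(identity, allergies=["penicillin"])  # placeholder record

class WearableRescueDevice:
    """Device side: capture (first bullet), then present instructions (third bullet)."""
    def __init__(self, server):
        self.server = server
    def capture_scene(self):
        return b"<jpeg bytes from the helmet camera>"  # placeholder camera frame
    def run(self):
        data = self.server.handle_scene_image(self.capture_scene())
        for line in self.rescue_instructions(data):
            print("INSTRUCTION:", line)             # e.g. heads-up display or speaker
    def rescue_instructions(self, data):
        for drug in data.allergies:
            yield f"Do not administer {drug}; use an alternative."

WearableRescueDevice(NetworkDevice()).run()
```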
  • a device for generating rescue instructions, wherein the device includes: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the operations of any of the above methods.
  • a computer-readable medium storing instructions that, when executed, cause a system to perform operations according to any of the above methods.
  • after identifying the rescued person, this application determines the relevant information according to their identity and generates corresponding rescue instructions based on this information for rescuers on the scene to carry out rescue operations; this can provide the rescued person with a personalized rescue plan, improves rescue efficiency, and reduces the dependence of rescue operations on the rescuers' experience and on-site judgment.
  • Fig. 1 is a flowchart of a method for generating rescue instructions according to an embodiment of the present application
  • Fig. 2 is a flowchart of a method for generating rescue instructions on a wearable rescue device according to an embodiment of the present application
  • Fig. 3 is a flowchart of a method for generating rescue instructions on a network device according to an embodiment of the present application
  • Fig. 4 shows functional modules of a wearable rescue device according to an embodiment of the present application
  • Fig. 5 shows the functional modules of a network device according to another embodiment of the present application.
  • Fig. 6 shows the functional modules of an exemplary system of the present application.
  • the terminal, the equipment of the service network, and the trusted party all include one or more processors (for example, a central processing unit (CPU)), input/output interfaces, network interfaces, and RAM.
  • Memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory, in computer-readable media. Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random-access memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • the equipment referred to in this application includes but is not limited to user equipment, network equipment, or equipment formed by the integration of user equipment and network equipment through a network.
  • the user equipment includes, but is not limited to, any mobile electronic product that can perform human-computer interaction with the user (for example, through a touchpad), such as a smartphone or a tablet computer, and the mobile electronic product can adopt any operating system, such as the Android operating system or the iOS operating system.
  • the network device includes an electronic device that can automatically perform numerical calculation and information processing in accordance with pre-set or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, etc.
  • the network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and a wireless ad hoc network (Ad Hoc Network).
  • the device may also be a program running on the user equipment, the network equipment, or a device formed by integrating the user equipment and the network equipment, or the network equipment and a touch terminal, through a network.
  • a method for generating rescue instructions includes the following steps:
  • the wearable rescue device captures a scene image of the rescue scene through the camera device installed in the wearable rescue device, and sends the scene image to the corresponding network device;
  • the network device determines the rescue data information of the rescued object at the rescue scene based on the scene image, and sends the rescue data information to the wearable rescue device;
  • the wearable rescue device receives the rescue data information, obtains rescue instructions corresponding to the rescue data information for on-site rescue of the rescued object, and provides the rescue instructions to the corresponding on-site rescue user.
  • the wearable rescue device referred to in this application is a hardware device, including but not limited to a smart helmet, smart glasses, or other hardware, which can be worn directly on the user's body or integrated into the user's clothes or other accessories.
  • it has the ability to communicate with other devices (for example, including but not limited to other user devices of the same user, or user devices of other users, cloud servers and other network devices, etc.).
  • a method for generating rescue instructions on the wearable rescue device side includes step S101, step S102, and step S103.
  • in step S101, the wearable rescue device captures a scene image of the rescue scene through the camera device installed in the wearable rescue device.
  • in step S102, the wearable rescue device determines the rescue data information of the rescued object at the rescue scene based on the scene image.
  • in step S103, the wearable rescue device obtains the rescue instruction corresponding to the rescue data information for performing on-site rescue of the rescued object, and provides the rescue instruction to the corresponding on-site rescue user.
  • the wearable rescue device captures a live image of the rescue scene through the camera device installed in the wearable rescue device.
  • the camera device can be a camera module built into the wearable rescue device, or a camera peripheral installed on the wearable rescue device; in fact, those skilled in the art should understand that any camera device, whether existing or appearing in the future, that can be applied to this application is also included in the protection scope of this application and is incorporated herein by reference.
  • the imaging device is used to collect image information, and generally includes a photosensitive element for converting optical signals into electrical signals; it may also include light refraction/reflection components (e.g., a lens or lens assembly) for adjusting the propagation path of incident light.
  • the camera device adjusts its angle of view with the movement of the rescuer and captures an image of the scene.
  • the scene image may include the rescued person and the surrounding situation, for example, the face, hand card, number plate, or barcode/QR code on the clothing of the relevant person, in order to identify the identity of the corresponding person or assess the situation on the spot.
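As a hedged illustration of the hand-card path mentioned above, assume a barcode/QR decoder has already extracted the card's text payload from the scene image; the "key=value" field layout below is an assumption made only for this sketch.

```python
def parse_hand_card(payload):
    """Parse an assumed 'key=value;key=value' hand-card payload into identity fields."""
    fields = dict(item.split("=", 1) for item in payload.split(";") if "=" in item)
    missing = {"id", "name", "event"} - fields.keys()
    if missing:
        raise ValueError(f"hand card missing fields: {sorted(missing)}")
    return fields

# Example payload a decoder might have produced from the card's QR code:
card = parse_hand_card("id=042;name=J. Doe;event=marathon-2020;lang=ja")
print(card["id"], card.get("lang", "unknown"))
```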
  • the wearable rescue device determines the rescue data information of the rescued object at the rescue scene based on the scene image. For example, the wearable rescue device performs image recognition on the scene image, determines the relevant information in the scene image (such as the identity information of one or more related persons), and determines the rescue data information of the rescued object based on the relevant information determined by the recognition.
  • the rescue data information includes, but is not limited to, the rescued object's medical data information (such as medical history, recent medical advice, etc.), the rescue precaution information of the rescued object (such as whether a pacemaker is installed, or whether any limbs make certain rescue methods inadvisable), the allergy information of the rescued object, and so on, and is used to provide safe and rapid rescue for the rescued object.
  • the wearable rescue device obtains the rescue instruction corresponding to the rescue data information for performing on-site rescue of the rescued object, and provides the rescue instruction to the corresponding on-site rescue user.
  • the above-mentioned rescue instruction is generated based on the above-mentioned rescue data information. For example, when the rescue data information includes allergy information, the rescue instruction includes an instruction to avoid the use of a certain drug, or an instruction to use alternative medicines or alternative rescue methods for the rescue.
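The allergy example above suggests a simple rule mapping from rescue data to instructions. A minimal sketch follows; the rule table, field names, and drug names are hypothetical.

```python
# Assumed lookup table: allergy -> a substitute a protocol might name.
ALTERNATIVES = {"penicillin": "a non-beta-lactam alternative"}

def instructions_from_rescue_data(data):
    """Derive rescue instructions from a rescue-data dict (hypothetical fields)."""
    out = []
    for drug in data.get("allergies", []):
        out.append(f"Avoid {drug}.")
        if drug in ALTERNATIVES:
            out.append(f"If needed, consider {ALTERNATIVES[drug]}.")
    if data.get("pacemaker"):
        out.append("Pacemaker present: follow AED pad-placement precautions.")
    return out

print(instructions_from_rescue_data({"allergies": ["penicillin"], "pacemaker": True}))
```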
  • the rescue data information is determined based on the identity information of the rescued object.
  • the aforementioned step S102 includes sub-step S1021 and sub-step S1022 (neither of which is shown).
  • in sub-step S1021, the wearable rescue device performs image recognition on the scene image to determine the identity information of the rescued object at the rescue scene. For example, the wearable rescue device recognizes locally, or transmits the scene image to the cloud for recognition by the cloud, to obtain the corresponding identity information of the rescued object; the recognition includes but is not limited to face recognition, iris recognition, image recognition of hand cards (for example, worn on the athlete's wrist and recording the athlete's related information in the form of text, barcodes, etc.), and so on.
  • in sub-step S1022, the wearable rescue device determines the rescue data information of the rescued object based on the identity information, for example, by requesting the rescue data information from a related network server based on the above identity information; in some embodiments, the network server is provided by a medical institution, so that it can provide the rescuer with accurate rescue data.
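One plausible shape for the sub-step S1022 lookup is an HTTP query keyed by identity. The sketch below assumes a hypothetical endpoint and JSON response; the URL, parameter name, and response fields are not from the application.

```python
import json
import urllib.parse
import urllib.request

def fetch_rescue_data(identity, base_url="https://records.example/api"):
    """Request rescue data for an identity from an assumed medical-records endpoint."""
    url = f"{base_url}/rescue-data?{urllib.parse.urlencode({'id': identity})}"
    with urllib.request.urlopen(url, timeout=5) as resp:  # real network I/O
        return json.load(resp)

# Hypothetical use on site:
# fetch_rescue_data("athlete-042") -> {"allergies": [...], "medical_history": [...]}
```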
  • the wearable rescue device determines the event information corresponding to the scene image based on the scene image, and determines the identity information of the rescued object at the rescue scene based on the event information and the recognition result of the image recognition of the scene image.
  • the event information includes, but is not limited to, the time, location, and entry number of the event in which the rescued object participated, and is used to narrow the search range, thereby improving the processing efficiency of the system.
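A short sketch of how event information could narrow the candidate pool before any heavier recognition step; the roster structure, field names, and sample data are assumptions.

```python
def candidates_for_event(roster, venue, bib=None):
    """Filter an event roster by venue, and by bib number when one was read."""
    pool = [p for p in roster if p["venue"] == venue]
    if bib is not None:                      # a number-plate read narrows further
        pool = [p for p in pool if p["bib"] == bib]
    return pool

roster = [
    {"id": "a1", "venue": "track-A", "bib": "042"},
    {"id": "a2", "venue": "track-A", "bib": "043"},
    {"id": "a3", "venue": "pool", "bib": "042"},
]
print(candidates_for_event(roster, "track-A", bib="042"))  # -> one candidate left
```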
  • the above method further includes step S104 (not shown).
  • in step S104, the wearable rescue device determines the language information of the rescued object according to the identity information, and provides auxiliary rescue information to the rescued object based on the language information.
  • the language information is used to characterize the language spoken by the rescued person; the auxiliary rescue information includes, but is not limited to, one-way or two-way translation (the translated content is provided to the rescued person), inquiry, comfort, etc., in the form of text, images, sound, and so on.
  • translation operations include but are not limited to text-based and voice-based translation.
  • the translation operation is used to provide corresponding assistance to the rescued person who masters a specific language to facilitate communication with rescuers.
  • the translation process can be completed synchronously by the cloud server; for inquiry or comfort, the relevant materials can be stored locally in the wearable rescue device, or the wearable rescue device can request them from the cloud server on demand.
  • the manner of determining the language information of the rescued person based on the identity information of the rescued person described above is only an example, and is not a limitation to the specific implementation of the application.
  • when the identity information of the rescued person cannot be accurately determined, in order to promote communication between the rescuer and the rescued and thereby improve rescue efficiency, it is still possible to determine the language information of the rescued object in other ways and provide auxiliary rescue information to the rescued object based on the language information.
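A minimal sketch of the two language-selection paths just described: prefer a language recorded against the identity, and fall back to a scene-derived hint otherwise. The profile table and comfort phrases are invented for illustration.

```python
PROFILE_LANG = {"athlete-042": "ja"}    # assumed identity -> language table
COMFORT = {
    "ja": "救助が来ています。動かないでください。",
    "en": "Help is here. Please stay still.",
}

def language_for(identity, scene_hint="en"):
    """Prefer the language on file for the identity; otherwise a scene-derived hint."""
    return PROFILE_LANG.get(identity, scene_hint)

print(COMFORT[language_for("athlete-042")])   # identity known -> Japanese prompt
print(COMFORT[language_for(None)])            # identity unknown -> fallback prompt
```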
  • the above method further includes step S105 (not shown). In this step S105, the wearable rescue device determines at least any one of the following based on the scene image: the country information of the rescued object, the location information of the rescue scene, or the ethnic information of the rescued object.
  • the wearable rescue device determines the language information of the rescued object based on one of the above-mentioned country information, location information, and ethnic information (or a combination of several of them), and then provides auxiliary rescue information to the rescued object based on the language information.
  • the above method further includes step S106 (not shown).
  • in step S106, the wearable rescue device obtains the voice information of the rescued object, determines the language information of the rescued object based on the voice information, and provides auxiliary rescue information to the rescued object based on the language information.
  • the rescued object can be provided with a preset voice prompt in the language mastered by the rescued object, reducing the burden on the rescuer.
  • the wearable rescue device provides the rescued object with voice information corresponding to the preset phrase selected by the corresponding on-site rescue user based on the language information.
  • the selection operations of the on-site rescue user can be performed via hardware buttons or knobs, touch pads, touch screens, voice control, etc. Those skilled in the art should understand that these selection operations are only examples and not a limitation of this application; if other existing or future operation methods can be applied to this application, they are also included in the protection scope of this application and are incorporated herein by reference.
  • in step S107, the wearable rescue device performs real-time language conversion on the speech of the rescued object based on the language information, and provides the converted voice information to the corresponding on-site rescue user.
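The real-time conversion in step S107 can be viewed as a recognize, translate, speak pipeline. The skeleton below stubs out all three stages, which in practice would be cloud speech services; none of the stub outputs are real.

```python
def recognize(audio, lang):
    return "胸が痛い"                                      # stub speech recognition

def translate(text, src, dst):
    return {"胸が痛い": "My chest hurts"}.get(text, text)  # stub machine translation

def speak(text, lang):
    print(f"[TTS {lang}] {text}")                          # stub text-to-speech

def relay(audio, src_lang, dst_lang):
    """One direction of step S107: rescued person's speech out to the rescuer."""
    speak(translate(recognize(audio, src_lang), src_lang, dst_lang), dst_lang)

relay(b"<microphone frames>", src_lang="ja", dst_lang="en")
```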
  • a rescue operation may require the participation of multiple parties. For example, in an emergency requiring the use of an automated external defibrillator (AED) and cardiopulmonary resuscitation, it is difficult for a single person to complete high-quality cardiopulmonary resuscitation alone, so in order to improve the rescue success rate, other rescuers need to be notified to participate in the rescue operation.
  • the above method further includes step S108 (not shown).
  • in step S108, the wearable rescue device sends a cooperation request to at least one other wearable rescue device, where the cooperation request includes a cooperation instruction corresponding to the at least one other wearable rescue device, and the cooperation instruction is used by the corresponding rescue user as a reference for performing the corresponding operation to coordinate the rescue.
  • the wearable rescue device determines a corresponding coordinated rescue operation sequence and, based on the coordinated rescue operation sequence, sends the cooperation request to the at least one other wearable rescue device.
  • the rescue instruction is determined based on the coordinated rescue operation sequence.
  • the system distributes a corresponding rescue instruction or cooperation instruction to each rescuer according to the coordinated rescue operation sequence.
  • the aforementioned coordinated rescue operation sequence includes several operation steps that different users need to perform; it can be manually generated by the user of the wearable rescue device, automatically generated by the wearable rescue device based on preset content, or obtained by the wearable rescue device from the corresponding network server on request.
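A sketch of how a coordinated rescue operation sequence could be split into a local rescue instruction plus cooperation requests for other devices, per step S108; the step list and device identifiers are illustrative assumptions.

```python
# Assumed sequence: (target device, operation step) pairs for an AED + CPR scenario.
SEQUENCE = [
    ("device-self", "Start chest compressions (100-120/min)."),
    ("device-2", "Fetch and attach the AED."),
    ("device-3", "Clear the area and guide the ambulance in."),
]

def dispatch(sequence, send):
    """Keep this device's step locally; send the rest as cooperation instructions."""
    for device, step in sequence:
        if device == "device-self":
            print("MY INSTRUCTION:", step)
        else:
            send(device, {"cooperation_instruction": step})

dispatch(SEQUENCE, send=lambda dev, msg: print(f"-> {dev}: {msg}"))
```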
  • certain operations are performed by the corresponding network device to expand the computing power of the wearable rescue device, to help the background command center follow up and dispatch in time, and to make it convenient for other rescue users to participate in the rescue and interact.
  • the above step S102 further includes sub-step S1023 and sub-step S1024 (neither are shown).
  • in sub-step S1023, the wearable rescue device sends the scene image to the corresponding network device; in sub-step S1024, the wearable rescue device receives the rescue data information of the rescued object sent by the network device based on the scene image.
  • here, the process of obtaining the rescue data information is completed by the network device and is similar to the process of obtaining the rescue data completed by the wearable rescue device described in the above embodiment.
  • the above method further includes step S109 (not shown).
  • in step S109, the wearable rescue device receives the auxiliary rescue information sent by the network device, and provides the auxiliary rescue information to the rescued object.
  • the network device determines the language information of the rescued object based on the identity information and sends auxiliary rescue information to the wearable rescue device based on the language information; the auxiliary rescue information is provided by the wearable rescue device to the rescued object.
  • the language information is used to characterize the language spoken by the rescued person; the auxiliary rescue information includes, but is not limited to, one-way or two-way translation (the translated content is provided to the rescued person), inquiry, comfort, etc., in the form of text, images, sound, and so on.
  • the translation operation includes but is not limited to text-based and voice-based translation.
  • the translation operation is used to provide corresponding assistance to the rescued person who masters a specific language, so that he can communicate with the rescuer.
  • the manner of determining the language information of the rescued person based on the identity information of the rescued person described above is only an example, and is not a limitation to the specific implementation of the application.
  • when the identity information of the rescued person cannot be accurately determined, in order to promote communication between the rescuer and the rescued and thereby improve rescue efficiency, it is still possible to determine the language information of the rescued object in other ways and provide auxiliary rescue information to the rescued object based on the language information.
  • the network device first determines at least any one of the following based on the scene image: the country information of the rescued object, the location information of the rescue scene, or the ethnic information of the rescued object. The network device then determines the language information of the rescued object based on one of the above-mentioned country information, location information, and ethnic information (or a combination of several of them), and provides auxiliary rescue information to the rescued object based on the language information.
  • the wearable rescue device acquires and sends the voice information of the rescued object to the network device, receives the auxiliary rescue information sent by the network device based on the voice information, and provides the auxiliary rescue information to the rescued object.
  • in step S110, the wearable rescue device collects and transmits the real-time voice information of the rescued object to the network device, receives the converted voice information sent by the network device based on the real-time voice information, and provides the converted voice information to the on-site rescue user.
  • the rescue instructions provided to rescue users may be generated by network devices.
  • the wearable rescue device receives the rescue instruction sent by the network device for on-site rescue of the rescued object, and provides the rescue instruction to the corresponding on-site rescue user, where the rescue instruction corresponds to the rescue data information.
  • a method for generating rescue instructions on a network device side is provided.
  • certain operations, such as the operation of determining the rescue data information of the rescued object based on the scene image, are performed by the corresponding network device to expand the computing power of the wearable rescue device, to help the background command center follow up and dispatch in time, and to make it convenient for other rescue users to participate in the rescue and interact.
  • the method includes step S201, step S202, and step S203.
  • in step S201, the network device receives the scene image of the rescue scene sent by the corresponding wearable rescue device.
  • in step S202, the network device determines the rescue data information of the rescued object at the rescue scene based on the scene image.
  • in step S203, the network device sends the rescue data information to the wearable rescue device.
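Steps S201 to S203 describe the server side of the exchange. A compact sketch under assumptions follows: recognition and the records lookup are stubbed, and the reply callback stands in for whatever transport carries the rescue data back to the wearable device.

```python
def on_scene_image(image, reply):
    """Steps S201-S203 in one handler: receive image, determine data, send it back."""
    identity = "athlete-042" if image else None   # stub for the recognition service
    rescue_data = {                               # stub for the records lookup
        "identity": identity,
        "allergies": ["penicillin"],
        "precautions": ["pacemaker"],
    }
    reply(rescue_data)                            # step S203: back to the wearable

on_scene_image(b"<jpeg>", reply=lambda data: print("to wearable device:", data))
```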
  • here, the process of obtaining the rescue data information is completed by the network device and is similar to the process of obtaining the rescue data completed by the wearable rescue device described in the above embodiment.
  • the above method further includes step S204 (not shown).
  • in step S204, the network device determines the language information of the rescued object, and sends corresponding auxiliary rescue information to the wearable rescue device based on the language information.
  • the network device determines the language information of the rescued object based on the identity information and sends auxiliary rescue information to the wearable rescue device based on the language information; the auxiliary rescue information is provided by the wearable rescue device to the rescued object.
  • the language information is used to characterize the language spoken by the rescued person; the auxiliary rescue information includes, but is not limited to, one-way or two-way translation (the translated content is provided to the rescued person), inquiry, comfort, etc., in the form of text, images, sound, and so on.
  • the translation operation includes but is not limited to text-based and voice-based translation.
  • the translation operation is used to provide corresponding assistance to the rescued person who masters a specific language, so that he can communicate with the rescuer.
  • the method of determining the language information of the rescued person based on the identity information of the rescued person described above is only an example, and is not a limitation to the specific implementation of the application.
  • when the identity information of the rescued person cannot be accurately determined, in order to promote communication between the rescuer and the rescued and thereby improve rescue efficiency, it is still possible to determine the language information of the rescued object in other ways and provide auxiliary rescue information to the rescued object based on the language information.
  • the network device first determines at least any one of the following based on the scene image: the country information of the rescued object, the location information of the rescue scene, or the ethnic information of the rescued object. The network device then determines the language information of the rescued object based on one of the above-mentioned country information, location information, and ethnic information (or a combination of several of them), and provides auxiliary rescue information to the rescued object based on the language information.
  • the network device receives the voice information of the rescued object sent by the wearable rescue device, and determines the language information of the rescued object based on the voice information.
  • in step S205, the network device receives the real-time voice information of the rescued object sent by the wearable rescue device, and sends the converted voice information to the wearable rescue device based on the real-time voice information.
  • in some embodiments, the rescue instruction for the rescued object is not generated locally on the device used by the on-site rescue user, but is generated by the network device, which helps the system or the commanding user schedule multiple rescue users at the rescue site.
  • the above method further includes step S206.
  • in step S206, the network device determines a rescue instruction for on-site rescue of the rescued object based on the rescue data information, and sends the rescue instruction to the wearable rescue device.
  • a rescue operation may require the participation of multiple parties. For example, in an emergency requiring the use of an automated external defibrillator (AED) and cardiopulmonary resuscitation, it is difficult for a single person to complete high-quality cardiopulmonary resuscitation alone, so in order to improve the rescue success rate, other rescuers need to be notified to participate in the rescue operation.
  • the network device performs the following operations:
  • a rescue instruction for on-site rescue of the rescued object is determined, and a cooperation request is sent to at least one other wearable rescue device, where the cooperation request includes the cooperation instruction corresponding to the at least one other wearable rescue device;
  • the rescue instruction is determined based on the coordinated rescue operation sequence. For example, the system distributes a corresponding rescue instruction or cooperation instruction to each rescuer according to the coordinated rescue operation sequence.
  • the aforementioned cooperative rescue operation sequence includes several operation steps that different users need to perform.
  • a wearable rescue device 100 for generating rescue instructions includes an image capturing module 101, a rescue data determining module 102, and a rescue instruction providing module 103.
  • the image capturing module 101 captures live images of the rescue scene through a camera device installed in the wearable rescue device.
  • the rescue data determining module 102 determines the rescue data information of the rescued object at the rescue site based on the scene image.
  • the rescue instruction providing module 103 obtains the rescue instruction corresponding to the rescue data information for performing on-site rescue of the rescued object, and provides the rescue instruction to the corresponding on-site rescue user.
  • the image capturing module 101 captures a live image of the rescue scene through the camera device installed in the wearable rescue device.
  • the camera device can be a camera module built into the wearable rescue device, or a camera peripheral installed on the wearable rescue device; in fact, those skilled in the art should understand that any camera device, whether existing or appearing in the future, that can be applied to this application is also included in the protection scope of this application and is incorporated herein by reference.
  • the imaging device is used to collect image information, and generally includes a photosensitive element for converting optical signals into electrical signals; it may also include light refraction/reflection components (e.g., a lens or lens assembly) for adjusting the propagation path of incident light.
  • the camera device adjusts its angle of view with the movement of the rescuer and captures an image of the scene.
  • the scene image may include the rescued person and the surrounding situation, for example, the face, hand card, number plate, or barcode/QR code on the clothing of the relevant person, in order to identify the identity of the corresponding person or assess the situation on the spot.
  • the rescue data determining module 102 determines the rescue data information of the rescued object at the rescue scene based on the scene image. For example, the wearable rescue device performs image recognition on the scene image, determines the relevant information in the scene image (such as the identity information of one or more related persons), and determines the rescue data information of the rescued object based on the relevant information determined by the recognition.
  • the rescue data information includes, but is not limited to, the rescued object's medical data information (such as medical history, recent medical advice, etc.), the rescue precaution information of the rescued object (such as whether a pacemaker is installed, or whether any limbs make certain rescue methods inadvisable), the allergy information of the rescued object, and so on, and is used to provide safe and rapid rescue for the rescued object.
  • the rescue instruction providing module 103 obtains the rescue instruction corresponding to the rescue data information for performing on-site rescue of the rescued object, and provides the rescue instruction to the corresponding on-site rescue user.
  • the above-mentioned rescue instruction is generated based on the above-mentioned rescue data information. For example, when the rescue data information includes allergy information, the rescue instruction includes an instruction to avoid the use of a certain drug, or an instruction to use alternative medicines or alternative rescue methods for the rescue.
  • the rescue data information is determined based on the identity information of the rescued object.
  • the aforementioned rescue data determining module 102 includes an identity information determining unit 1021 and a rescue data determining unit 1022 (neither of which is shown).
  • the identity information determining unit 1021 performs image recognition on the scene image to determine the identity information of the rescued object at the rescue scene. For example, the wearable rescue device recognizes locally, or transmits the scene image to the cloud for recognition by the cloud, to obtain the corresponding identity information of the rescued object; the recognition includes but is not limited to face recognition, iris recognition, image recognition of hand cards (for example, worn on the athlete's wrist and recording the athlete's related information in the form of text, barcodes, etc.), and so on.
  • the rescue data determining unit 1022 determines the rescue data information of the rescued object based on the identity information, for example, by requesting the rescue data information from a relevant network server based on the above identity information; in some embodiments, the network server is provided by a medical institution, so that it can provide rescuers with accurate rescue information.
  • the identity information determining unit 1021 determines the event information corresponding to the scene image based on the scene image, and determines the identity information of the rescued object at the rescue scene based on the event information and the recognition result of the image recognition of the scene image.
  • the event information includes, but is not limited to, the time, location, and entry number of the event in which the rescued object participated, and is used to narrow the search range, thereby improving the processing efficiency of the system.
  • the aforementioned device further includes a first auxiliary rescue information providing module 104 (not shown).
  • the first auxiliary rescue information providing module 104 determines the language information of the rescued object according to the identity information, and provides auxiliary rescue information to the rescued object based on the language information.
  • the language information is used to characterize the language spoken by the rescued person; the auxiliary rescue information includes, but is not limited to, one-way or two-way translation (the translated content is provided to the rescued person), inquiry, comfort, etc., in the form of text, images, sound, and so on.
  • translation operations include but are not limited to text-based and voice-based translation.
  • the translation operation is used to provide corresponding assistance to the rescued person who masters a specific language to facilitate communication with rescuers.
  • the translation process can be completed synchronously by the cloud server; for inquiry or comfort, the relevant materials can be stored locally in the wearable rescue device, or the wearable rescue device can request them from the cloud server on demand.
  • the manner of determining the language information of the rescued person based on the identity information of the rescued person described above is only an example, and is not a limitation to the specific implementation of the application.
  • when the identity information of the rescued person cannot be accurately determined, in order to promote communication between the rescuer and the rescued and thereby improve rescue efficiency, it is still possible to determine the language information of the rescued object in other ways and provide auxiliary rescue information to the rescued object based on the language information.
  • the above-mentioned equipment further includes a second auxiliary rescue information providing module 105 (not shown).
  • the second auxiliary rescue information providing module 105 determines at least any one of the following based on the scene image: the country information of the rescued object, the location information of the rescue scene, or the ethnic information of the rescued object.
  • the wearable rescue device determines the language information of the rescued object based on one of the above-mentioned country information, location information, and ethnic information (or a combination of several of them), and then provides auxiliary rescue information to the rescued object based on the language information.
  • the above-mentioned equipment further includes a third auxiliary rescue information providing module 106 (not shown).
  • the third auxiliary rescue information providing module 106 obtains the voice information of the rescued object, determines the language information of the rescued object based on the voice information, and then provides auxiliary rescue information to the rescued object based on the language information.
  • the rescued object can be provided with a preset voice prompt in the language mastered by the rescued object, reducing the burden on the rescuer.
  • the first auxiliary rescue information providing module 104 provides the rescued object with voice information corresponding to a preset phrase selected by the corresponding on-site rescue user based on the language information.
  • the selection operations of the on-site rescue user can be performed via hardware buttons or knobs, touch pads, touch screens, voice control, etc. Those skilled in the art should understand that these selection operations are only examples and not a limitation of this application; if other existing or future operation methods can be applied to this application, they are also included in the protection scope of this application and are incorporated herein by reference.
  • the aforementioned device further includes a real-time language conversion module 107 (not shown).
  • the real-time language conversion module 107 performs real-time language conversion on the speech of the rescued object based on the language information, and provides the converted voice information to the corresponding on-site rescue user.
  • a rescue operation may require the participation of multiple parties. For example, in an emergency requiring the use of an automated external defibrillator (AED) and cardiopulmonary resuscitation, it is difficult for a single person to complete high-quality cardiopulmonary resuscitation alone, so in order to improve the rescue success rate, other rescuers need to be notified to participate in the rescue operation.
  • the above-mentioned device further includes a cooperation request sending module 108 (not shown).
  • the cooperation request sending module 108 sends a cooperation request to at least one other wearable rescue device, where the cooperation request includes a cooperation instruction corresponding to the at least one other wearable rescue device, and the cooperation instruction is used by the corresponding rescue user as a reference for performing the corresponding operation to coordinate the rescue.
  • the cooperation request sending module 108 determines the corresponding coordinated rescue operation sequence and, based on the coordinated rescue operation sequence, sends the cooperation request to the at least one other wearable rescue device.
  • the rescue instruction is determined based on the coordinated rescue operation sequence.
  • the system distributes a corresponding rescue instruction or cooperation instruction to each rescuer according to the coordinated rescue operation sequence.
  • the aforementioned coordinated rescue operation sequence includes several operation steps that different users need to perform; it can be manually generated by the user of the wearable rescue device, automatically generated by the wearable rescue device based on preset content, or obtained by the wearable rescue device from the corresponding network server on request.
  • certain operations are performed by the corresponding network device to expand the computing power of the wearable rescue device, to help the background command center follow up and dispatch in time, and to make it convenient for other rescue users to participate in the rescue and interact.
  • the above-mentioned rescue data determining module 102 further includes an on-site image sending unit 1023 and a rescue data receiving unit 1024 (neither shown).
  • the scene image sending unit 1023 sends the scene image to the corresponding network device; the rescue data receiving unit 1024 receives the rescue data information of the rescued object sent by the network device based on the scene image.
  • here, the process of obtaining the rescue data information is completed by the network device and is similar to the process of obtaining the rescue data completed by the wearable rescue device described in the above embodiment.
  • the above-mentioned equipment further includes an auxiliary rescue information providing module 109 (not shown).
  • the auxiliary rescue information providing module 109 receives the auxiliary rescue information sent by the network device, and provides the auxiliary rescue information to the rescued object.
  • the network device determines the language information of the rescued object based on the identity information and sends auxiliary rescue information to the wearable rescue device based on the language information; the auxiliary rescue information is provided by the wearable rescue device to the rescued object.
  • the language information is used to characterize the language spoken by the rescued person; the auxiliary rescue information includes, but is not limited to, one-way or two-way translation (the translated content is provided to the rescued person), inquiry, comfort, etc., in the form of text, images, sound, and so on.
  • translation operations include but are not limited to text-based and voice-based translation.
  • the translation operation is used to provide corresponding assistance to the rescued person who masters a specific language to facilitate communication with rescuers.
  • the manner of determining the language information of the rescued person based on the identity information of the rescued person described above is only an example, and is not a limitation to the specific implementation of the application.
  • when the identity information of the rescued person cannot be accurately determined, in order to promote communication between the rescuer and the rescued and thereby improve rescue efficiency, it is still possible to determine the language information of the rescued object in other ways and provide auxiliary rescue information to the rescued object based on the language information.
  • the network device first determines at least any one of the following based on the scene image: the country information of the rescued object, the location information of the rescue scene, or the ethnic information of the rescued object. The network device then determines the language information of the rescued object based on one of the above-mentioned country information, location information, and ethnic information (or a combination of several of them), and provides auxiliary rescue information to the rescued object based on the language information.
  • the auxiliary rescue information providing module 109 acquires and sends the voice information of the rescued object to the network device, receives the auxiliary rescue information sent by the network device based on the voice information, and provides the auxiliary rescue information to the rescued object.
  • the aforementioned device further includes a voice information providing module 110 (not shown).
  • the voice information providing module 110 collects and sends the real-time voice information of the rescued object to the network device, receives the converted voice information sent by the network device based on the real-time voice information, and provides the converted voice information to the on-site rescue user.
  • the rescue instructions provided to rescue users may be generated by network devices.
  • the rescue instruction providing module 103 receives the rescue instruction sent by the network device for on-site rescue of the rescued object, and provides the rescue instruction to the corresponding on-site rescue user, where the rescue instruction corresponds to the rescue data information.
  • a network device 200 for generating rescue instructions is provided. For example, in some embodiments, certain operations, such as the operation of determining the rescue data information of the rescued object based on the scene image, are performed by the corresponding network device to expand the computing power of the wearable rescue device, to help the background command center follow up and dispatch in time, and to make it convenient for other rescue users to participate in the rescue and interact.
  • the network device 200 includes an on-site image receiving module 201, a rescue data determining module 202, and a rescue data sending module 203.
  • the scene image receiving module 201 receives the scene image of the rescue scene sent by the corresponding wearable rescue device.
  • the rescue data determining module 202 determines the rescue data information of the rescued object at the rescue site based on the scene image.
  • the rescue data sending module 203 sends the rescue data information to the wearable rescue device.
  • here, the process of obtaining the rescue data information is completed by the network device and is similar to the process of obtaining the rescue data completed by the wearable rescue device described in the above embodiment.
  • the above-mentioned equipment further includes an auxiliary rescue information sending module 204 (not shown).
  • the auxiliary rescue information sending module 204 determines the language information of the rescued object, and sends corresponding auxiliary rescue information to the wearable rescue device based on the language information.
  • the network device determines the language information of the rescued object based on the identity information and sends auxiliary rescue information to the wearable rescue device based on the language information; the auxiliary rescue information is provided by the wearable rescue device to the rescued object.
  • the language information is used to characterize the language spoken by the rescued person; the auxiliary rescue information includes, but is not limited to, one-way or two-way translation (the translated content is provided to the rescued person), inquiry, comfort, etc., in the form of text, images, sound, and so on.
  • the translation operation includes but is not limited to text-based and voice-based translation.
  • the translation operation is used to provide corresponding assistance to the rescued person who masters a specific language to facilitate communication with rescuers.
  • the manner of determining the language information of the rescued person based on the identity information of the rescued person described above is only an example, and is not a limitation to the specific implementation of the application.
  • when the identity information of the rescued person cannot be accurately determined, in order to promote communication between the rescuer and the rescued and thereby improve rescue efficiency, it is still possible to determine the language information of the rescued object in other ways and provide auxiliary rescue information to the rescued object based on the language information.
  • the network device first determines at least any one of the following based on the scene image: the country information of the rescued object, the location information of the rescue scene, or the ethnic information of the rescued object. The network device then determines the language information of the rescued object based on one of the above-mentioned country information, location information, and ethnic information (or a combination of several of them), and provides auxiliary rescue information to the rescued object based on the language information.
  • the auxiliary rescue information sending module 204 receives the voice information of the rescued object sent by the wearable rescue device, and determines the language information of the rescued object based on the voice information.
  • the aforementioned device further includes a voice conversion and sending module 205.
  • the voice conversion and sending module 205 receives the real-time voice information of the rescued object sent by the wearable rescue device, and sends the converted voice information to the wearable rescue device based on the real-time voice information.
  • in some embodiments, the rescue instruction for the rescued object is not generated locally on the device used by the on-site rescue user, but is generated by the network device, which helps the system or the commanding user schedule multiple rescue users at the rescue site.
  • the aforementioned device further includes a rescue instruction sending module 206.
  • the rescue instruction sending module 206 determines a rescue instruction for on-site rescue of the rescued object based on the rescue data information, and sends the rescue instruction to the wearable rescue device.
  • a rescue operation may require the participation of multiple parties. For example, in an emergency requiring the use of an automated external defibrillator (AED) and cardiopulmonary resuscitation, it is difficult for a single person to complete high-quality cardiopulmonary resuscitation alone, so in order to improve the rescue success rate, other rescuers need to be notified to participate in the rescue operation.
  • the network device performs the following operations:
  • a rescue instruction for on-site rescue of the rescued object is determined, and a cooperation request is sent to at least one other wearable rescue device, where the cooperation request includes the cooperation instruction corresponding to the at least one other wearable rescue device;
  • the rescue instruction is determined based on the coordinated rescue operation sequence. For example, the system distributes a corresponding rescue instruction or cooperation instruction to each rescuer according to the coordinated rescue operation sequence.
  • the aforementioned cooperative rescue operation sequence includes several operation steps that different users need to perform.
  • the present application also provides a computer-readable storage medium that stores computer code, and when the computer code is executed, the method described in any of the preceding items is executed.
  • the present application also provides a computer program product.
  • when the computer program product is executed by a computer device, the method described in any of the preceding items is executed.
  • This application also provides a computer device, which includes:
  • one or more processors;
  • a memory for storing one or more computer programs;
  • when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any one of the preceding items.
  • Figure 6 shows an exemplary system that can be used to implement the various embodiments described in this application.
  • the system 1000 can be used as any wearable rescue device or network device in each of the described embodiments.
  • the system 1000 may include one or more computer-readable media having instructions (for example, system memory or NVM/storage device 1020) and one or more processors (for example, the processor(s) 1005) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
  • the system control module 1010 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 1005 and/or to any suitable device or component in communication with the system control module 1010.
  • the system control module 1010 may include a memory controller module 1030 to provide an interface to the system memory 1015.
  • the memory controller module 1030 may be a hardware module, a software module, and/or a firmware module.
  • the system memory 1015 may be used to load and store data and/or instructions for the system 1000, for example.
  • the system memory 1015 may include any suitable volatile memory, such as a suitable DRAM.
  • the system memory 1015 may include a double-data-rate fourth-generation synchronous dynamic random access memory (DDR4 SDRAM).
  • the system control module 1010 may include one or more input/output (I/O) controllers to provide interfaces to the NVM/storage device 1020 and the communication interface(s) 1025.
  • NVM/storage device 1020 may be used to store data and/or instructions.
  • the NVM/storage device 1020 may include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDD), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives).
  • the NVM/storage device 1020 may include storage resources that are physically part of the device on which the system 1000 is installed, or it may be accessed by the device without necessarily being part of the device.
  • the NVM/storage device 1020 may be accessed via the communication interface(s) 1025 through the network.
  • the communication interface(s) 1025 may provide an interface for the system 1000 to communicate through one or more networks and/or with any other suitable devices.
  • the system 1000 can wirelessly communicate with one or more components of the wireless network according to any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 1005 may be packaged with the logic of one or more controllers of the system control module 1010 (eg, the memory controller module 1030). For one embodiment, at least one of the processor(s) 1005 may be packaged with the logic of one or more controllers of the system control module 1010 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 1005 may be integrated with the logic of one or more controllers of the system control module 1010 on the same mold. For one embodiment, at least one of the processor(s) 1005 may be integrated with the logic of one or more controllers of the system control module 1010 on the same mold to form a system on chip (SoC).
  • the system 1000 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.).
  • the system 1000 may have more or fewer components and/or different architectures.
  • the system 1000 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touchscreen displays), non-volatile memory ports, multiple antennas, graphics chips, application specific integrated circuits (ASICs), and speakers.
  • this application can be implemented in software and/or a combination of software and hardware; for example, it can be implemented by an application specific integrated circuit (ASIC), a general purpose computer, or any other similar hardware device.
  • the software program of the present application may be executed by a processor to realize the steps or functions described above.
  • the software program (including related data structures) of the present application can be stored in a computer-readable recording medium, such as a RAM, a magnetic or optical drive, a floppy disk, or similar devices.
  • some steps or functions of the present application may be implemented by hardware, for example, as a circuit that cooperates with a processor to execute each step or function.
  • the computer program instructions in the computer-readable medium include but are not limited to source files, executable files, installation package files, etc.
  • the manners in which computer program instructions are executed by the computer include, but are not limited to: the computer directly executes the instructions; or the computer compiles the instructions and then executes the corresponding compiled program; or the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding post-installation program.
  • the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
  • Communication media include media by which communication signals containing, for example, computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another.
  • Communication media may include conductive transmission media (such as cables and wires, for example optical fiber and coaxial cable) and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared media.
  • Computer-readable instructions, data structures, program modules, or other data may be embodied, for example, as a modulated data signal in a wireless medium such as a carrier wave or a similar mechanism (for example, embodied as part of spread-spectrum technology).
  • The term "modulated data signal" refers to a signal having one or more of its characteristics altered or set in such a way as to encode information in the signal. The modulation may be an analog, digital, or hybrid modulation technique.
  • A computer-readable storage medium may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer-readable storage media include, but are not limited to: volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); and any other currently known or later-developed media capable of storing computer-readable information/data for use by a computer system.
  • an embodiment according to the present application includes a device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.
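To make the interplay between such stored instructions and the described method concrete, the following minimal Python sketch mocks the wearable-rescue-device side of the flow (capture a scene image, obtain rescue data from a network device, generate and present a rescue instruction). It is an illustration under stated assumptions, not the disclosed implementation: every class, method, and field name (WearableRescueDevice, RescueData, determine_rescue_data, and so on) is hypothetical.

# Minimal sketch (assumptions labeled): mock of the wearable-rescue-device
# side of the described method; all identifiers are hypothetical.
from dataclasses import dataclass

@dataclass
class RescueData:
    rescued_object_id: str
    condition: str  # illustrative field, e.g. "cardiac arrest"

class StubNetworkDevice:
    # Stands in for the remote network device that analyzes the scene image.
    def determine_rescue_data(self, scene_image: bytes) -> RescueData:
        return RescueData(rescued_object_id="user-42", condition="cardiac arrest")

class WearableRescueDevice:
    def __init__(self, network_device: StubNetworkDevice):
        self.network_device = network_device

    def capture_scene_image(self) -> bytes:
        # Stand-in for the camera installed in the wearable rescue equipment.
        return b"<jpeg bytes>"

    def generate_rescue_instruction(self, data: RescueData) -> str:
        # Map the received rescue data to a concrete on-site instruction.
        if data.condition == "cardiac arrest":
            return "Begin CPR and attach an AED if one is available."
        return "Keep the rescued object stable and await further guidance."

    def run_once(self) -> str:
        image = self.capture_scene_image()
        data = self.network_device.determine_rescue_data(image)
        instruction = self.generate_rescue_instruction(data)
        print(instruction)  # provided to the corresponding on-site rescue user
        return instruction

WearableRescueDevice(StubNetworkDevice()).run_once()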

Abstract

The objective of the present invention is to provide a method and a device for generating a rescue instruction. The method comprises the following steps: a wearable rescue device transmits an on-site image to a corresponding network device; the network device determines rescue data information of a rescued object at the rescue site on the basis of the on-site image, and transmits the rescue data information to the wearable rescue device; and the wearable rescue device acquires a corresponding rescue instruction for performing on-site rescue of the rescued object, and provides the rescue instruction to a corresponding on-site rescue user.
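As a complementary, purely illustrative sketch of the network-device side summarized above, the fragment below stubs out how rescue data information for a rescued object might be looked up once the on-site image has been received; the recognition step and all names (NetworkDevice, recognize_rescued_object, profile_db) are hypothetical placeholders, not the patented method.

# Minimal sketch (assumptions labeled): mock of the network-device side,
# which determines rescue data information from a received on-site image.
from typing import Optional

class NetworkDevice:
    def __init__(self, profile_db: dict):
        # profile_db maps a recognized identity to stored rescue data,
        # e.g. medical conditions registered for that person (illustrative).
        self.profile_db = profile_db

    def recognize_rescued_object(self, scene_image: bytes) -> Optional[str]:
        # Placeholder for image-based identification of the rescued object.
        return "user-42" if scene_image else None

    def determine_rescue_data(self, scene_image: bytes) -> dict:
        identity = self.recognize_rescued_object(scene_image)
        profile = self.profile_db.get(identity, {})
        return {"rescued_object_id": identity,
                "condition": profile.get("condition", "unknown")}

# Illustrative use: answer a wearable device's request.
nd = NetworkDevice({"user-42": {"condition": "cardiac arrest"}})
print(nd.determine_rescue_data(b"<jpeg bytes>"))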
PCT/CN2020/078405 2019-03-14 2020-03-09 Method and device for generating a rescue instruction WO2020182096A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910193577.1 2019-03-14
CN201910193577.1A CN109908508A (zh) A method and device for generating rescue instructions

Publications (1)

Publication Number Publication Date
WO2020182096A1 true WO2020182096A1 (fr) 2020-09-17

Family

ID=66964811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078405 WO2020182096A1 (fr) Method and device for generating a rescue instruction

Country Status (2)

Country Link
CN (1) CN109908508A (fr)
WO (1) WO2020182096A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109908508A (zh) * 2019-03-14 2019-06-21 上海救要救信息科技有限公司 A method and device for generating rescue instructions
CN110681082B (zh) * 2019-09-29 2021-08-20 北京空间技术研制试验中心 Integrated automated explosion-protection and intelligent emergency life-saving system for confined spaces
CN111739065A (zh) * 2020-06-29 2020-10-02 上海出版印刷高等专科学校 Target recognition method, system, electronic device and medium based on digital printing
CN114915660A (zh) * 2021-02-09 2022-08-16 上海救要救信息科技有限公司 A method and device for collaborative rescue

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8558883B2 (en) * 2007-07-27 2013-10-15 Sportvision, Inc. Providing graphics in images depicting aerodynamic flows and forces
CN103501336A (zh) * 2013-09-30 2014-01-08 东北大学 Rescuer safety protection monitoring system and method based on the Internet of Things
CN103578223A (zh) * 2013-11-06 2014-02-12 深圳市蓝港反光材料有限公司 Arm-worn multifunctional rescue device
US20150338912A1 (en) * 2014-05-23 2015-11-26 Seoul National University R&Db Foundation Memory aid method using audio/video data
CN204028658U (zh) * 2014-06-27 2014-12-17 王哲龙 Intelligent rescue helmet and system
CN107924637A (zh) * 2015-07-01 2018-04-17 爱思应用认知工程有限公司 System and method for cognitive training
CN105915593A (zh) * 2016-04-11 2016-08-31 上海救要救信息科技有限公司 Method and device for processing rescue requests
CN106110531A (zh) * 2016-08-02 2016-11-16 昆明理工大学 Fire-fighting rescue device
CN107071050A (zh) * 2017-05-15 2017-08-18 严治 Emergency call-for-help medical assistance system
CN107610414A (zh) * 2017-08-28 2018-01-19 杜学超 Emergency alarm help-seeking system and method
CN108447232A (zh) * 2018-05-18 2018-08-24 高辉 Alarm help-seeking method, alarm help-seeking platform, storage medium and system
CN109062482A (zh) * 2018-07-26 2018-12-21 百度在线网络技术(北京)有限公司 Human-computer interaction control method, apparatus, service device and storage medium
CN109908508A (zh) * 2019-03-14 2019-06-21 上海救要救信息科技有限公司 A method and device for generating rescue instructions

Also Published As

Publication number Publication date
CN109908508A (zh) 2019-06-21

Similar Documents

Publication Publication Date Title
WO2020182096A1 (fr) Method and device for generating a rescue instruction
US11482311B2 (en) Automated clinical documentation system and method
US11494735B2 (en) Automated clinical documentation system and method
WO2020187075A1 (fr) Method and device for providing rescue information to a rescuer
WO2020211806A1 (fr) Method and device for providing a rescue voice prompt
WO2020156524A1 (fr) Method and device for determining rescue team information
CN110087205B (zh) A method and device for acquiring basic parameters of a rescued person

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20769353

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20769353

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 04.02.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20769353

Country of ref document: EP

Kind code of ref document: A1