CN114735165B - Intelligent underwater lifesaving system and drowning detection and rescue method - Google Patents


Info

Publication number
CN114735165B
CN114735165B (application CN202210228026.6A; published as CN114735165A, granted as CN114735165B)
Authority
CN
China
Prior art keywords
underwater robot
rescue
underwater
drowning
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210228026.6A
Other languages
Chinese (zh)
Other versions
CN114735165A (en)
Inventor
李雪桐 (Li Xuetong)
Current Assignee
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202210228026.6A priority Critical patent/CN114735165B/en
Publication of CN114735165A publication Critical patent/CN114735165A/en
Application granted granted Critical
Publication of CN114735165B publication Critical patent/CN114735165B/en

Classifications

    • B63C 9/01 — Air-sea rescue devices, i.e. equipment carried by, and capable of being dropped from, an aircraft
    • B63C 9/02 — Lifeboats, life-rafts or the like, specially adapted for life-saving
    • B63G 8/001 — Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; equipment specially adapted therefor, e.g. docking stations
    • B63G 8/38 — Arrangement of visual or electronic watch equipment, e.g. of periscopes, of radar
    • B64C 39/02 — Aircraft not otherwise provided for, characterised by special use
    • B64D 1/12 — Dropping, ejecting, or releasing articles, the articles being load-carrying devices; releasing
    • G06N 20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N 3/045 — Neural networks; combinations of networks
    • G06T 7/66 — Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G08B 21/08 — Alarms for ensuring the safety of persons responsive to the presence of persons in a body of water, e.g. a swimming pool; responsive to an abnormal condition of a body of water
    • B63C 2009/0017 — Life-saving in water characterised by making use of satellite radio beacon positioning systems, e.g. the Global Positioning System [GPS]
    • B63G 2008/002 — Unmanned underwater vessels
    • B64U 2101/00 — UAVs specially adapted for particular uses or applications
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Ocean & Marine Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Medical Informatics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to an intelligent underwater lifesaving system comprising an underwater robot, an unmanned aerial vehicle, a server and a smart bracelet. The underwater robot comprises a main body, a human-body support seat, a life belt and a life-saving fish; a first turbine on the main body provides ascending and descending power in the water, and a second turbine at the tail of the main body provides forward, reverse and steering power. The smart bracelet comprises a second microprocessor together with a heart-rate sensor, a water-pressure sensor, a third GPS module, a fourth wireless communication module and a fourth geomagnetic sensor, each connected to the second microprocessor. The underwater robot performs underwater rescue by winding the life belt around the drowning person to lift the person's head above the water surface; compared with manual rescue, its rescue efficiency and practicability are high. The invention integrates the underwater robot, the unmanned aerial vehicle, the smart bracelet and the server; the server performs real-time drowning detection using captured images together with the physiological data provided by the smart bracelet, so that drowning is discovered as early as possible and rescue is carried out in time.

Description

Intelligent underwater lifesaving system and drowning detection and rescue method
Technical Field
The invention belongs to the field of water lifesaving, and particularly relates to an intelligent underwater lifesaving system and a drowning detection and rescue method.
Background
Water rescue places very high demands on timeliness and stability, and draws on leading-edge technologies from a wide range of disciplines, including computing, automatic control and sensing.
Judging from the current state of water-rescue research at home and abroad, results on mobile rescue by unmanned aerial vehicles and underwater robots are still scarce; most work remains at the stage of preliminary research and testing, and the overall technical and service level is low. For swimming scenarios in both crowded and sparsely populated waters, the water-rescue strategies in the prior art only address how to detect and position; fast mobile rescue and prompt delivery of rescue services are not well realised, so by the time people become aware of a drowning, rescue often cannot be provided in time. Rescue techniques compatible with both open waters and indoor swimming venues, and practical rescue capability, are likewise lacking. Because the smart bracelet and the rescue unmanned aerial vehicle have not been integrated, existing water lifesaving systems at home and abroad remain incomplete, with a large gap to practical application and a low overall technical and service level.
Disclosure of Invention
The invention aims to solve the above problems and provides an intelligent underwater lifesaving system and a method for rescuing drowning persons. An underwater robot for autonomous underwater rescue of drowning persons is designed, and the underwater robot, an unmanned aerial vehicle, a smart bracelet and a server are integrated; the underwater robot and the smart bracelet achieve accurate indoor and outdoor positioning by combining complementary geomagnetic and GPS positioning technologies. The server, on the one hand, uses a neural-network model to recognise the posture of the human body in images captured in real time by the underwater robot or an underwater camera and judges from that posture whether drowning is occurring; on the other hand, it applies a machine-learning algorithm to the heart rate, underwater depth and X-, Y- and Z-axis acceleration collected in real time by the smart bracelet to judge whether the wearer is drowning. Detecting drowning in real time through these two channels allows drowning persons to be discovered as early as possible and avoids casualties caused by late rescue.
The technical scheme of the invention is an intelligent underwater lifesaving system comprising an underwater robot. The underwater robot comprises a main body, a human-body support seat, a life belt and a life-saving fish, the front end of the life belt being connected with the life-saving fish. A controller, together with a first wireless communication module and a first GPS module connected to the controller, is arranged in the main body; a camera at the head of the main body is connected with the controller through a data line. A first turbine on the main body provides ascending and descending power in the water, and a second turbine at the tail of the main body provides forward, reverse and steering power; the control ends of the first and second turbines are each connected with the controller. The tail end of the life belt is fixedly connected with the side of the main body.
The life-saving fish comprises a fish-shaped body, a bionic fin and a propeller thruster arranged at the tail of the body; the rear end of the thruster carries a traction part for connecting and towing the life belt. A first microprocessor, together with a second wireless communication module and a second geomagnetic sensor each electrically connected to it, is arranged inside the fish-shaped body, and the control end of the thruster's drive motor is connected with the control-signal output of the first microprocessor.
Furthermore, the longitudinal axis of the bionic fin is connected with the output shaft of a steering engine whose control end is connected with the control-signal output of the first microprocessor. Driven by the steering engine, the heading of the life-saving fish is controlled by adjusting the angle between the bionic fin and the centre line of the thruster.
Preferably, the underwater robot further comprises a first geomagnetic sensor, and the first geomagnetic sensor is connected with the controller through a data line.
Preferably, the system further comprises an unmanned aerial vehicle. A connecting rod for attachment to the underwater robot is arranged below each cantilever of the unmanned aerial vehicle; an electric hydraulic clamp at the lower end of the connecting rod mates with a connecting hole in the top of the underwater robot, and the control end of the clamp is connected with the control unit of the unmanned aerial vehicle. The unmanned aerial vehicle also comprises a second GPS module and a second wireless communication module, each connected with its control unit.
Preferably, the system further comprises an intelligent bracelet, wherein the intelligent bracelet comprises a single chip microcomputer and a heart rate sensor, a water pressure sensor, a third GPS module, a third wireless communication module and a second geomagnetic sensor which are connected with the single chip microcomputer respectively.
Preferably, the system further comprises a server, the server is in communication connection with the underwater robot, the unmanned aerial vehicle and the smart bracelet respectively, and the server detects whether the human body in the image drowns according to the images provided by the underwater robot and the unmanned aerial vehicle; the server carries out drowning detection according to the heart rate, the underwater depth and the triaxial acceleration data of the person collected by the intelligent bracelet.
Furthermore, the underwater robot also comprises a target recognition module which runs on a processor of the controller, the target recognition module recognizes a human body in water according to an image shot by the camera, and judges whether the human body is a rescue target according to the position and the image characteristics of the human body.
The human-body drowning detection method based on images of the human body in water comprises the following steps:
Step 1: capture an image of the human body in the water;
Step 2: perform human-body target detection on the image using a YOLOv5s model to obtain a human-body bounding box;
Step 3: recognise the human posture inside the bounding box using an AlphaPose model, obtaining the skeleton key points and determining their coordinates;
Step 4: compute the body-posture features of the human body from the coordinates of the skeleton key points;
Step 5: judge from the body-posture features whether drowning has occurred; if it has, estimate how long the person has been drowning and raise an alarm.

The human-body drowning detection method based on the smart bracelet comprises the following steps:
s1: acquiring the heart rate and the underwater depth of a normal swimming person and a simulated drowning person in water and the X, Y, Z axial acceleration by using the intelligent wristband respectively;
s2: constructing a drowning detection model by adopting a support vector machine, and training the drowning detection model by using the data obtained in the step S1 as the input of the drowning detection model;
s3: acquiring the heart rate and the underwater depth of a human body to be detected and the X, Y, Z axial acceleration of the human body to be detected in real time by using an intelligent bracelet;
s4: the data acquired in the step S3 is used as the input of a drowning detection model to detect the drowning phenomenon in real time, if the drowning phenomenon is detected, the step S5 is executed, otherwise, the step S3 is executed;
s5: and acquiring the position of the drowned human body and sending out a drowning alarm.
The drowning-person rescue method comprises the following steps:
1) Detect drowning of persons in the water; when drowning is detected, determine the position of the drowning person and go to step 2), otherwise repeat step 1);
2) Deploy an underwater robot at the rescue site according to the position of the drowning person;
3) Control the underwater robot to travel through the water to the drowning person's position, collect images of the person there, and confirm whether the person is the rescue target;
4) Control the underwater robot so that its human-body support seat is aligned with the drowning person's back;
5) Control the life-saving fish to tow the life belt around the drowning person's body; once the belt surrounds the body, control the life-saving fish to circle the human-body support seat so that the belt is wound tightly around the root of the seat; after winding is complete, control the life-saving fish to swim back to the receiving chamber and inflate the life belt;
6) Control the underwater robot to rise to the water surface so that the drowning person's head is lifted above it.
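The six numbered steps form a strict sequence, which can be captured as a small state machine. The sketch below is illustrative only: the phase and event names are assumptions, since the patent fixes the order of operations but defines no software interface.

```python
from enum import Enum, auto

class Phase(Enum):
    DETECT = auto(); DEPLOY = auto(); APPROACH = auto()
    ALIGN = auto(); ENCIRCLE = auto(); SURFACE = auto(); DONE = auto()

# Hypothetical event names mapping each completed step to the next phase.
TRANSITIONS = {
    (Phase.DETECT,   "drowning_confirmed"): Phase.DEPLOY,    # step 1 -> 2
    (Phase.DEPLOY,   "robot_in_water"):     Phase.APPROACH,  # step 2 -> 3
    (Phase.APPROACH, "target_confirmed"):   Phase.ALIGN,     # step 3 -> 4
    (Phase.ALIGN,    "seat_at_back"):       Phase.ENCIRCLE,  # step 4 -> 5
    (Phase.ENCIRCLE, "belt_inflated"):      Phase.SURFACE,   # step 5 -> 6
    (Phase.SURFACE,  "head_above_water"):   Phase.DONE,      # step 6 done
}

def step(phase, event):
    """Advance the rescue sequence; unknown events leave the phase unchanged."""
    return TRANSITIONS.get((phase, event), phase)

phase = Phase.DETECT
for ev in ["drowning_confirmed", "robot_in_water", "target_confirmed",
           "seat_at_back", "belt_inflated", "head_above_water"]:
    phase = step(phase, ev)
print(phase)  # Phase.DONE
```

Keeping the sequence explicit like this makes it easy to guarantee, for example, that the life belt is never inflated before the support seat is aligned with the person's back.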
Compared with the prior art, the invention has the following beneficial effects:
1) The invention provides an underwater robot for underwater rescue of drowning persons, equipped with a human-body support seat and a life belt and able to rise, dive and turn autonomously in the water. During rescue the life belt binds the drowning person to the support seat so that the head can be lifted above the water surface; compared with manual rescue, the rescue efficiency and practicability are high and unnecessary injury to the drowning person is avoided.
2) The underwater robot includes the life-saving fish, which tows the life belt around the drowning person's body during rescue. The life-saving fish operates with good autonomy, quickly binding the body to the robot's support seat so that the head can be lifted clear of the water and the person rescued in time.
3) The invention integrates the underwater robot, the unmanned aerial vehicle, the smart bracelet and the server. The server, on the one hand, recognises the posture of the human body in images captured in real time by the underwater robot and judges whether drowning is occurring; on the other hand, it judges drowning in real time from the heart rate, underwater depth and X-, Y- and Z-axis acceleration collected by the smart bracelet. Detecting drowning through these two channels in real time discovers drowning persons promptly and improves the efficiency of drowning detection and rescue. Through the positioning provided by the drowning person's smart bracelet, the server enables the underwater robot to reach the drowning person quickly for rescue; the unmanned aerial vehicle flies to the drowning person's position and airdrops the underwater robot, further improving rescue efficiency.
4) The invention uses geomagnetic positioning to make up for weak indoor GPS signals, achieving accurate positioning of drowning persons in indoor waters and facilitating timely rescue.
5) The method detects the human-body target in the image with a YOLOv5s model; after the bounding box is obtained, an improved AlphaPose model recognises the posture inside it, extracts the skeleton key points, determines their coordinates and computes the body-posture features, achieving real-time judgment of drowning with high detection accuracy.
6) The drowning detection method uses a support vector machine to judge in real time, from the heart rate, underwater depth and X-, Y- and Z-axis acceleration collected by the smart bracelet, whether the wearer is drowning, with good real-time performance and high accuracy.
7) The rescue method has a high success rate and protects the drowning person well during rescue.
Drawings
The invention is further illustrated by the following figures and examples.
Fig. 1 is a schematic structural diagram of an intelligent underwater lifesaving system in an embodiment of the invention.
Fig. 2 is a schematic view of an underwater robot of an embodiment of the present invention.
Fig. 3 is a schematic view of a human body support base of the underwater robot according to the embodiment of the present invention.
Fig. 4 is a schematic structural view of a life-saving fish according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of the YOLOv5s model.
Fig. 7 is a schematic diagram of an improved YOLOv5s model according to an embodiment of the present invention.
FIG. 8 is a schematic diagram of the AlphaPose model according to an embodiment of the invention.
FIG. 9a is a schematic diagram of key points of human bones according to an embodiment of the present invention.
Fig. 9b is a schematic diagram of the underwater human skeleton key points according to the embodiment of the invention.
Fig. 10 is a schematic flow chart of a human drowning detection method according to an embodiment of the present invention.
Detailed Description
As shown in fig. 1, the intelligent underwater lifesaving system of the embodiment includes a server, and an underwater robot 100, an unmanned aerial vehicle 200, an intelligent bracelet 300 and a monitoring terminal which are in communication connection with the server respectively. The server detects whether the human body in the image is drowned according to the images provided by the underwater robot 100 and the unmanned aerial vehicle 200.
As shown in fig. 2 to 4, the underwater robot 100 includes a main body 101, a human body support base 105, a life belt 103 and a life-saving fish 102, wherein the front end of the life belt 103 is connected with the life-saving fish 102; a controller, a first wireless communication module, a first GPS module and a first geomagnetic sensor which are connected with the controller are arranged in the main body 101, a camera 104 is arranged at the head of the main body, and the camera 104 is connected with the controller through a data line; the main body 101 is provided with a first turbine 106 for providing ascending and descending power in water, the tail part of the main body 101 is provided with a second turbine 107 for providing advancing, backing and steering power in water, and the control ends of the first turbine 106 and the second turbine 107 are respectively connected with the controller. The two sides of the human body supporting seat 105 are respectively provided with a groove 111 which is convenient for winding the life belt.
As shown in fig. 3, a high-pressure gas tank 108 for inflating the life belt 103 is provided on one side of the main body 101 of the underwater robot 100. The other side of the main body 101 carries a receiving chamber 109 for the life belt and the life-saving fish. The life belt 103 is a hollow, airtight, belt-shaped structure; its inflation end is connected with the high-pressure gas tank 108 through an electromagnetic valve whose control end is connected with the control-signal output of the controller. In this embodiment the high-pressure gas tank 108 stores liquefied nitrogen.
As shown in fig. 4, the life-saving fish 102 comprises a fish-imitating body 1021, a bionic fin 1024 and a propeller 1022 arranged at the tail of the fish-imitating body, wherein the rear end of the propeller 1022 is provided with a traction part 1023 for connecting and dragging a life-saving belt; a first microprocessor, a second wireless communication module and a second geomagnetic sensor which are electrically connected with the first microprocessor are arranged in the fish-imitating body 1021, and a control end of a driving motor of the propeller 1022 is connected with a control signal output end of the first microprocessor.
The longitudinal axis of the bionic fin 1024 is connected with the output shaft of the steering engine, and the control end of the steering engine is connected with the control signal output end of the first microprocessor. Under the driving of the steering engine, the advancing direction of the life-saving fish 102 is controlled by adjusting the angle between the bionic fin 1024 and the central line of the propeller 1022.
The unmanned aerial vehicle, the underwater robot and the smart bracelet of the embodiment all combine geomagnetic positioning with GPS positioning, using the geomagnetic sensor for accurate positioning in environments with weak GPS signals, such as indoor swimming pools.
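A minimal sketch of this combined positioning policy: prefer the GPS fix when it looks trustworthy and fall back to the geomagnetic estimate otherwise. The satellite-count and HDOP thresholds and the data shapes are assumptions for illustration; the patent only states that the two technologies complement each other.

```python
def fused_position(gps_fix, geomagnetic_fix, min_satellites=4, max_hdop=5.0):
    """Pick GPS when the fix is trustworthy, otherwise fall back to the
    geomagnetic estimate (e.g. inside an indoor pool).
    gps_fix: dict with 'lat', 'lon', 'satellites', 'hdop', or None.
    geomagnetic_fix: (lat, lon) tuple from geomagnetic fingerprint matching.
    Thresholds are illustrative assumptions, not values from the patent."""
    if (gps_fix is not None
            and gps_fix["satellites"] >= min_satellites
            and gps_fix["hdop"] <= max_hdop):
        return ("gps", gps_fix["lat"], gps_fix["lon"])
    return ("geomagnetic",) + geomagnetic_fix

# Outdoors: a healthy GPS fix wins
outdoor = fused_position({"lat": 30.7, "lon": 111.3, "satellites": 8, "hdop": 1.2},
                         (30.701, 111.301))
# Indoor pool: weak GPS, the geomagnetic estimate is used instead
indoor = fused_position({"lat": 0.0, "lon": 0.0, "satellites": 1, "hdop": 20.0},
                        (30.702, 111.302))
print(outdoor[0], indoor[0])  # gps geomagnetic
```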
The smart band 300 comprises a second microprocessor, a heart rate sensor, a water pressure sensor, a third GPS module, a fourth wireless communication module and a third geomagnetic sensor which are respectively connected with the second microprocessor, and the smart band 300 is in communication connection with a server. The smart band 300 obtains the underwater depth of the smart band by using real-time data output by the water pressure sensor.
As shown in fig. 5, a connecting rod 201 for connecting with the underwater robot is arranged below a cantilever of the unmanned aerial vehicle 200, an electric hydraulic clamp 202 is arranged at a connecting end of the connecting rod 201 and the underwater robot, a bite head of the electric hydraulic clamp 202 is matched with a connecting hole 111 at the top of the underwater robot, and a control end of the electric hydraulic clamp 202 is connected with a control unit of the unmanned aerial vehicle; the unmanned aerial vehicle 200 further comprises a second GPS module and a third wireless communication module, and the second GPS module and the third wireless communication module are respectively connected with the control unit of the unmanned aerial vehicle.
In this embodiment, when the unmanned aerial vehicle 200 transports the underwater robot 100 through the air, the control unit makes the engagement heads of the electric hydraulic clamps 202 at the bottom of the connecting rods below the drone's four cantilevers bite into the connecting holes 111 in the top of the underwater robot, so that the underwater robot 100 is connected with the unmanned aerial vehicle 200 and carried in flight. To airdrop the underwater robot 100, the control unit opens the engagement heads of the four electric hydraulic clamps 202 and the robot separates from the unmanned aerial vehicle 200 under gravity, completing the drop.
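The attach/release behaviour of the four clamps can be modelled as follows; the class and method names are hypothetical, since the patent describes the mechanism but no control software.

```python
class HydraulicClamp:
    """Toy stand-in for one electric hydraulic clamp's engagement head."""
    def __init__(self):
        self.engaged = False
    def engage(self):
        self.engaged = True
    def release(self):
        self.engaged = False

class DropController:
    """Models the four clamps below the UAV's cantilevers."""
    def __init__(self, n_clamps=4):
        self.clamps = [HydraulicClamp() for _ in range(n_clamps)]
    def attach_robot(self):
        # carrying: every engagement head bites into its connecting hole
        for c in self.clamps:
            c.engage()
    def robot_secured(self):
        # fly only when all four clamps are engaged
        return all(c.engaged for c in self.clamps)
    def drop_robot(self):
        # airdrop: all heads open; the robot separates under gravity
        for c in self.clamps:
            c.release()
        return not any(c.engaged for c in self.clamps)

uav = DropController()
uav.attach_robot()
print(uav.robot_secured())  # True
print(uav.drop_robot())     # True
```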
The second wireless communication module of the life-saving fish 102 is in communication connection with the first wireless communication module of the underwater robot 100; the first wireless communication module of the underwater robot 100, the third wireless communication module of the unmanned aerial vehicle 200 and the fourth wireless communication module of the smart bracelet 300 each connect to the server through wireless routers.
The postures of a person in water can be roughly divided into two types: parallel to the water surface and perpendicular to it; other postures can be approximated by one of these two. A drowning person generally falls into the second type, with the posture perpendicular to the water surface. At this stage the person is still conscious, the state can last for a comparatively long time, and the drowning features are easier to capture; reliable and fast detection of the drowning state at this early moment is therefore of the greatest practical significance. As shown in fig. 9a, 18 skeleton key points are extracted from the human body, and the posture is determined from their relative positions.
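Given two skeleton key points on the torso, the parallel/perpendicular distinction can be approximated by the angle of the neck-to-hip vector against the vertical image axis. This is a simplified illustration: the 45° threshold and the choice of just two key points are assumptions, whereas the patent works with the relative positions of all 18 points.

```python
import math

def torso_angle_deg(neck, hip):
    """Angle between the neck->hip vector and the vertical image axis.
    Keypoints are (x, y) pixel coordinates with y growing downward."""
    dx, dy = hip[0] - neck[0], hip[1] - neck[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))

def classify_posture(neck, hip, threshold_deg=45.0):
    """Below the threshold the torso is roughly perpendicular (upright) to
    the water surface, otherwise roughly parallel. Threshold is assumed."""
    return "perpendicular" if torso_angle_deg(neck, hip) < threshold_deg else "parallel"

print(classify_posture((100, 50), (102, 150)))  # perpendicular (upright in the water)
print(classify_posture((100, 50), (200, 55)))   # parallel (floating on the surface)
```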
When drowning, the posture characteristics of the human body include:
1) Underwater drowning characteristic: the feet swing ceaselessly, the body floats up and down, and the position coordinates of the feet change over time;
2) Final drowning characteristic: the human body lies parallel to the water surface and floats on it; whether the key skeletal points change position relative to one another over time is judged to conclude whether the final drowning characteristic has appeared.
When a person has just begun to drown, the body tends to face upward and the legs float up and down under water; the whole body is not easily recognized from an image shot below the water surface. Taking the skeletal key points of the lower half of the body shown in fig. 9b as the core, the change in the relative positions of these key points over time can be detected, and the state of underwater drowning can thereby be identified.
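The posture classification above can be sketched with just two of the 18 key points. The following Python sketch is illustrative, not part of the patent: the choice of the neck/mid-hip pair and the 45-degree split between "parallel" and "vertical" are assumptions.

```python
import math

def torso_angle_deg(neck, mid_hip):
    """Angle between the torso line and the horizontal, in degrees."""
    dx = mid_hip[0] - neck[0]
    dy = mid_hip[1] - neck[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def classify_posture(neck, mid_hip, split_deg=45.0):
    # "vertical" (perpendicular to the water surface) vs "parallel"
    return "vertical" if torso_angle_deg(neck, mid_hip) > split_deg else "parallel"

print(classify_posture((100, 50), (102, 180)))  # upright body -> vertical
print(classify_posture((100, 50), (240, 55)))   # flat body    -> parallel
```

In the patent's scheme, "vertical" would correspond to the drowning-relevant second posture type.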
As shown in fig. 10, the method for detecting human drowning based on the human body image in water includes the following steps:
step 1: shooting to obtain an image of a human body in water;
step 2: carrying out human body target detection on the image by utilizing an improved YOLOv5s model to obtain a human body positioning frame;
as shown in fig. 6, the network model of YOLOv5s includes a Backbone network backhaul, a multi-scale feature fusion module Neck, and a Prediction end Prediction.
The YOLOv5s of the embodiment restructures the high-resolution input image in the first layer, Focus, of the Backbone network: the four neighbouring pixels around each sampling point are stacked into the channel dimension to obtain a lower-resolution image, which enlarges the receptive field of each point and reduces the loss of original information, thereby reducing the amount of calculation and increasing the calculation speed. The third layer, C3/BottleneckCSP, of the Backbone network comprises a Bottleneck part and a CSP part, the aim being to improve the learning capability of the convolutional neural network. The SPP module in the ninth layer of the Backbone network applies 5/9/13 maximum pooling followed by Concat fusion, which also enlarges the receptive field of each point. The multi-scale feature fusion module Neck of the YOLOv5s of the embodiment adopts the Mask R-CNN and FPN frameworks to help pixels convey strong positioning features; using the two frameworks together enhances the network's feature fusion capability.
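The Focus slicing described above can be illustrated in a few lines. This is an assumption-level sketch using NumPy arrays rather than network tensors; the real Focus layer additionally applies a convolution after the slicing.

```python
import numpy as np

def focus_slice(x):
    """Focus-style slicing: (C, H, W) with even H, W -> (4C, H/2, W/2).

    Every 2x2 neighbourhood is split into four sub-images stacked on the
    channel axis, halving spatial resolution while keeping every pixel.
    """
    return np.concatenate(
        [x[:, 0::2, 0::2], x[:, 1::2, 0::2],
         x[:, 0::2, 1::2], x[:, 1::2, 1::2]], axis=0)

img = np.arange(2 * 4 * 4).reshape(2, 4, 4)  # toy 2-channel 4x4 "image"
out = focus_slice(img)
print(out.shape)  # (8, 2, 2)
```

No pixel is discarded: the four slices partition the original image, which is why the text says the loss of original information is reduced.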
The Prediction module of the YOLOv5s of the embodiment contains a bounding-box loss function and Non-Maximum Suppression (NMS). The parameter iou-thres is added to the YOLOv5s of the embodiment, which effectively solves the problem of non-overlapping bounding boxes; setting its value to 0.45 gives the best effect and effectively improves the regression speed and precision of the prediction frame. The YOLOv5s of the embodiment uses NMS in the human body detection and positioning stage, strengthening recognition in complex environments such as light, shadow and occlusion, and yielding an optimal target detection frame.
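The NMS step with the iou-thres parameter can be sketched as follows. This is a minimal greedy NMS in plain Python for illustration, with the 0.45 threshold from the text as the default; it is not the patent's implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS: keep the best-scoring box, drop overlapping rivals."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thres]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # -> [0, 2]: the near-duplicate is suppressed
```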
As shown in fig. 7, the improved YOLOv5s model of the embodiment, based on the YOLOv5s model, adds convolution weight coefficients to the convolutional layers of both the Backbone network and the head network, adds upsampling coefficients to the upsampling layers of both networks, sets the number of BottleneckCSP modules to (3,6,6,3), and sets the number of categories nc to 1.
The improved target detection model YOLOv5s of the embodiment has 8.1M model parameters, compared with 7.1M for the prior-art YOLOv5s, where M denotes a million.
Step 3: recognize the human body posture within the human body positioning frame using the AlphaPose model, obtain the skeletal key points, and determine their coordinates;
as shown in fig. 8, the Alphapose model of the embodiment identifies Multi-Person gestures according to the detection of the body location boxes of the image by using a Regional Multi-Person gesture Estimation Network (RMPE), where the RMPE includes a Space Transformation Network (STN), a Single Person gesture Estimation (SPPE), a Spatial De-Transformer Network (SDTN), and a gesture Non-Maximum suppressor (PPNMS) connected in sequence.
The STN segments and extracts a single human body positioning frame from the image containing multiple positioning frames, for subsequent single-person pose recognition. The SDTN maps the detected and recognized human pose back into the original image; it realizes the inverse process of the STN, and together the STN and SDTN form a Symmetric Spatial Transformer Network (SSTN). The SPPE detects and identifies the human pose within a positioning frame. The SSTN and SPPE combination of the embodiment realizes pose detection and identification for multiple persons in one image. The PP-NMS eliminates redundant pose estimates: it computes pose similarity through a defined pose distance and deletes redundant detection frames, improving the accuracy of human pose detection and recognition.
In training the RMPE, a pipeline parallel to the SSTN is added; this parallel pipeline comprises a Pose-Guided Proposals Generator (PGPG) and a parallel SPPE. The PGPG generates human body positioning frame samples obeying a specific distribution, derived from the ground-truth poses and positioning frames of the training samples, thereby augmenting the training data with inaccurate positioning frames and improving the tolerance of the SSTN+SPPE pose detection model of the embodiment to such inaccuracy. To improve the accuracy of pose detection and recognition, the pose labels output by the SPPE are centred. The parallel SPPE shares the STN with the SPPE in the pose detection model; during training, its pose output is compared with the centred ground-truth pose of the training sample, the deviation is computed and back-propagated to the STN, so that the STN learns to focus on the correct region and extract high-quality human body areas.
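The redundancy-elimination idea behind PP-NMS can be sketched with a simple pose distance. This is a hedged simplification: here a pose is treated as redundant when its mean key-point distance to a higher-scoring pose falls below a fixed threshold, whereas AlphaPose's actual PP-NMS uses a parametric, learned similarity criterion.

```python
import math

def pose_distance(p, q):
    """Mean Euclidean distance between corresponding key points."""
    return sum(math.dist(a, b) for a, b in zip(p, q)) / len(p)

def pose_nms(poses, scores, dist_thres=10.0):
    """Keep the best-scoring pose; drop poses too similar to it."""
    order = sorted(range(len(poses)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order
                 if pose_distance(poses[i], poses[j]) > dist_thres]
    return keep

p1 = [(0, 0), (10, 10), (20, 20)]
p2 = [(1, 1), (11, 10), (21, 21)]     # near-duplicate estimate of p1
p3 = [(100, 0), (110, 10), (120, 20)]  # a different person
print(pose_nms([p1, p2, p3], [0.9, 0.8, 0.7]))  # -> [0, 2]
```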
The RMPE of the embodiment follows the RMPE model disclosed in the paper "RMPE: Regional Multi-Person Pose Estimation".
In the embodiment, the human body targets in the image are detected with the AlphaPose model, yielding the 18 human skeletal key points shown in fig. 9a together with their coordinates in the image.
Step 4: calculate human body posture characteristics from the coordinates of the skeletal key points; judge from these characteristics whether drowning has occurred; if drowning has occurred, judge the duration of drowning and give an alarm;
Step 4.1: respectively calculate the linear velocities V_10 and V_13 of skeletal key points (10) and (13), which correspond to the feet of the human body; calculate the single-leg velocity V_11-13 from the linear velocities of skeletal key points (11) and (13); and calculate the linear velocity V_O of the chest centre of gravity O.
In the embodiment, the linear velocities are calculated from time-separated image frames within M consecutive frames. The position of the chest centre of gravity O is calculated from the coordinates of skeletal key points (1), (8) and (11).
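Step 4.1 can be sketched as follows, under stated assumptions: coordinates are in pixels, the frame rate is known, and the chest centre O is taken as the centroid of key points (1), (8) and (11) (the patent derives O from those three points without giving its exact formula).

```python
import math

def linear_velocity(p_prev, p_curr, dt):
    """Speed of one key point between two time-separated frames."""
    return math.dist(p_prev, p_curr) / dt

def chest_center(kp):
    """Centroid of skeletal key points 1, 8 and 11 (assumed definition)."""
    pts = [kp[1], kp[8], kp[11]]
    return (sum(p[0] for p in pts) / 3, sum(p[1] for p in pts) / 3)

dt = 1 / 25  # assumed 25 fps camera
v_foot = linear_velocity((120, 300), (120, 312), dt)
print(round(v_foot))  # 300 pixels per second for a 12-pixel move

kp = {1: (200, 180), 8: (190, 260), 11: (210, 262)}
print(chest_center(kp))
```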
Step 4.2: preliminarily judge whether drowning has occurred from the ratio of the foot linear velocities V_10 and V_13 to the linear velocity V_O of the chest centre of gravity O;
The drowning judgment formula is:
(Formula (I) appears as an image in the original: a condition on V_10, V_13, V_11-13 and V_O involving the threshold λ and the average-linear-velocity bounds α and β.)
where λ denotes a set threshold, and α and β denote the upper and lower limits of the average linear velocity, respectively;
If V_10, V_13, V_O and V_11-13 satisfy formula (I), underwater drowning is preliminarily judged and the procedure ends; if they do not satisfy formula (I), step 4.3 is executed;
Step 4.3: calculate the included angle θ_a between the perpendicular bisector of the upper half of the body and the horizontal plane. If θ_0 < θ_a < θ_1, the angle θ_a stays continuously within this range for a time T, and a < V_O < b, final drowning is judged and the procedure ends.
Here θ_0 and θ_1 denote the minimum and maximum of the angle between the perpendicular bisector of the upper half of a drowning body and the horizontal plane, and a and b denote the lower and upper thresholds of the linear velocity; otherwise, step 4.4 is executed;
Step 4.4: calculate the included angle θ_b between the perpendicular bisector of the body and the horizontal line. If θ_2 < θ_b < θ_3, the angle θ_b stays continuously within this range for a time T, c < V_O < d, and the skeletal key points of the head can be detected, final drowning is judged; otherwise, no drowning is judged. Here θ_2 and θ_3 denote the minimum and maximum of the angle between the perpendicular bisector of a drowning body and the horizontal plane, and c and d denote the minimum and maximum of the linear velocity.
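The decision flow of steps 4.2 to 4.4 can be sketched in code. Formula (I) appears only as an image in the published text, so the velocity test below (feet much faster than the chest, with the average foot velocity inside [β, α]) is one plausible reading, not the patent's exact expression; all threshold values are illustrative assumptions.

```python
def detect_drowning(v10, v13, v11_13, v_o, theta_a, theta_b, t_a, t_b,
                    head_visible, lam=3.0, alpha=400.0, beta=40.0,
                    upright_rng=(60.0, 90.0), flat_rng=(0.0, 20.0),
                    v_rng=(0.0, 30.0), T=3.0):
    v_avg = (v10 + v13 + v11_13) / 3
    # step 4.2 (assumed reading of formula (I)): fast feet, slow chest
    if v_o > 0 and v_avg / v_o > lam and beta < v_avg < alpha:
        return "underwater drowning"
    # step 4.3: upper body near vertical for time T, chest almost still
    if upright_rng[0] < theta_a < upright_rng[1] and t_a >= T \
            and v_rng[0] < v_o < v_rng[1]:
        return "final drowning"
    # step 4.4: body flat for time T, almost still, head key points seen
    if flat_rng[0] < theta_b < flat_rng[1] and t_b >= T \
            and v_rng[0] < v_o < v_rng[1] and head_visible:
        return "final drowning"
    return "not drowning"

print(detect_drowning(300, 280, 260, 20, 10, 5, 0, 0, True))
print(detect_drowning(10, 10, 10, 10, 75, 0, 5, 0, False))
```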
A human body drowning detection method based on a smart bracelet comprises the following steps:
S1: using the smart bracelet to collect, in water, the heart rate, underwater depth and X, Y, Z axial accelerations of normally swimming persons and of persons simulating drowning, respectively;
s2: constructing a drowning detection model by adopting a support vector machine, and training the drowning detection model by using the data obtained in the step S1 as the input of the drowning detection model;
s3: acquiring the heart rate and the underwater depth of a human body to be detected and the X, Y, Z axial acceleration of the human body to be detected in real time by using an intelligent bracelet;
s4: the data acquired in the step S3 is used as the input of a drowning detection model to detect the drowning phenomenon in real time, if the drowning phenomenon is detected, the step S5 is executed, otherwise, the step S3 is executed;
s5: and acquiring the position of the drowned human body and sending out a drowning alarm.
The support vector machine in step S2 refers to the support-vector-machine model disclosed in Ding Shifei, "Summary of Theory and Algorithm Research on Support Vector Machines", Journal of University of Electronic Technology, vol. 40, 2011.
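Steps S1 to S4 can be sketched with a standard support-vector classifier. The feature values below are synthetic placeholders, not measured data, and the kernel choice is an assumption; the patent only specifies that a support vector machine is trained on heart rate, depth and triaxial acceleration.

```python
from sklearn import svm

# columns: heart rate (bpm), depth (m), |ax|, |ay|, |az|  -- synthetic
X_train = [
    [75, 0.3, 0.2, 0.2, 0.1],   # normal swimming
    [80, 0.5, 0.3, 0.2, 0.2],   # normal swimming
    [130, 2.0, 1.5, 1.4, 1.6],  # simulated drowning
    [125, 1.8, 1.6, 1.3, 1.5],  # simulated drowning
]
y_train = [0, 0, 1, 1]          # 0 = normal, 1 = drowning

clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

sample = [128, 1.9, 1.5, 1.4, 1.5]  # real-time bracelet reading (step S3)
print(clf.predict([sample])[0])     # step S4: class 1 triggers the alarm
```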
The drowning person rescue method comprises the following steps:
1) The server performs drowning detection either from images of human bodies in water collected by the unmanned aerial vehicle, an underwater camera or the underwater robot, or from the heart rate, underwater depth and triaxial acceleration data collected by the smart bracelet. When drowning is detected, the position of the drowning person is determined, the positioning coordinates and the image of the drowning person are sent to the underwater robot, the positioning coordinates are sent to the unmanned aerial vehicle, and step 2) is executed; otherwise, step 1) is repeated;
2) The unmanned aerial vehicle drops the underwater robot at the rescue site according to the positioning coordinates of the drowning person;
3) Controlling the underwater robot to travel underwater to the position of the drowning person, collecting images of the person at that position, and judging and confirming whether the person is the rescue target;
4) Controlling the underwater robot to enable a human body supporting seat of the underwater robot to be aligned with the back of a drowning person;
5) Controlling the rescue fish to draw the rescue belt around the body of the drowning person; after the rescue fish has pulled the rescue belt around the body, controlling it to move in circles around the human body supporting seat so that the rescue belt is wound tightly at the root of the seat; once the belt is wound, controlling the rescue fish to swim back to its containing chamber and inflating the rescue belt;
6) And controlling the underwater robot to drive to the water surface to enable the head of the drowning person to be exposed out of the water surface.
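The rescue sequence of steps 1) to 6) can be sketched as an ordered state machine. The step names mirror the text; the abort-on-wrong-target behaviour and return values are illustrative assumptions.

```python
RESCUE_STEPS = [
    "detect and locate drowning person",
    "drone drops underwater robot at rescue site",
    "robot travels to person and confirms rescue target",
    "align support seat with person's back",
    "rescue fish wraps and inflates rescue belt",
    "robot surfaces with person's head above water",
]

def run_rescue(confirm_target):
    """Execute steps in order; abort at the confirmation step on mismatch."""
    done = []
    for step in RESCUE_STEPS:
        if "confirms" in step and not confirm_target:
            return done  # wrong target: stop and resume searching
        done.append(step)
    return done

print(len(run_rescue(confirm_target=True)))   # all 6 steps executed
print(len(run_rescue(confirm_target=False)))  # aborted after 2 steps
```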
The improved YOLOv5s model of the embodiment is a lightweight model that strikes a balance between detection precision and speed, improving the timeliness of drowning detection.
When the underwater robot 100 inflates the life belt, the controller opens the electromagnetic valve at the inflation end of the life belt 103; liquid nitrogen from the high-pressure gas tank 108 vaporizes after flowing through the valve and fills the life belt 103.
In the embodiment, the underwater robot 100 further includes a target recognition module running on the processor of the controller; the module recognizes a human body in the water from the images shot by the camera 104, compares the body's image features with the image features of the drowning person provided by the server, and thereby determines whether it is the rescue target.

Claims (6)

1. The intelligent underwater lifesaving system is characterized by comprising an underwater robot (100), wherein the underwater robot (100) comprises a main body (101), a human body supporting seat (105), a lifesaving belt (103) and lifesaving fish (102), and the front end of the lifesaving belt (103) is connected with the lifesaving fish (102);
a controller, a first wireless communication module and a first GPS module which are connected with the controller are arranged in the main body (101), a camera (104) is arranged at the head of the main body, and the camera (104) is connected with the controller through a data line;
the main body (101) is provided with a first turbine (106) for providing ascending and descending power in water, the tail part of the main body (101) is provided with a second turbine (107) for providing advancing, backing and steering power in water, and the control ends of the first turbine (106) and the second turbine (107) are respectively connected with the controller;
the tail end of the life belt (103) is connected with the side part of the main body (101);
the rescue fish (102) comprises a fish-imitating body (1021), a bionic fin (1024) and a propeller thruster (1022) arranged at the tail of the fish-imitating body, wherein the rear end of the propeller thruster (1022) is provided with a traction part (1023) for connecting and dragging a rescue belt;
a first microprocessor and a second wireless communication module connected with the first microprocessor are arranged in the fish-imitating body (1021), and the control end of a driving motor of the propeller thruster (1022) is connected with the control signal output end of the first microprocessor;
the longitudinal axis of the bionic fin (1024) is connected with the output shaft of the steering engine, and the control end of the steering engine is connected with the control signal output end of the first microprocessor;
the life belt (103) is a hollow airtight belt-shaped structure, the inflation end of the life belt (103) is connected with the high-pressure air tank (108) through an electromagnetic valve, and the control end of the electromagnetic valve is connected with the control signal output end of the controller.
2. An intelligent underwater lifesaving system according to claim 1, wherein the underwater robot (100) further comprises a first geomagnetic sensor connected to a controller thereof.
3. The intelligent underwater lifesaving system according to claim 2, further comprising an unmanned aerial vehicle (200), wherein a connecting rod (201) for connecting with the underwater robot is arranged below a cantilever of the unmanned aerial vehicle (200), an electric hydraulic clamp (202) is arranged at the connecting end of the connecting rod (201) and the underwater robot, an occlusion head of the electric hydraulic clamp (202) is matched with a connecting hole site (111) at the top of the underwater robot, and a control end of the electric hydraulic clamp (202) is connected with a control unit of the unmanned aerial vehicle;
the unmanned aerial vehicle (200) further comprises a second GPS module and a third wireless communication module, and the second GPS module and the third wireless communication module are respectively connected with the control unit of the unmanned aerial vehicle.
4. The intelligent underwater lifesaving system according to claim 3, further comprising a server, wherein the server is in communication connection with the underwater robot (100) and the unmanned aerial vehicle (200), and the server detects whether the human body in the image is drowned according to the images provided by the underwater robot (100) and the unmanned aerial vehicle (200).
5. An intelligent underwater lifesaving system as claimed in claim 4, wherein the system further comprises an intelligent bracelet (300), the intelligent bracelet (300) comprises a second microprocessor and a heart rate sensor, a water pressure sensor, a third GPS module, a fourth wireless communication module and a third geomagnetic sensor which are respectively connected with the second microprocessor, and the intelligent bracelet (300) is in communication connection with the server.
6. The drowning person rescue method of the intelligent underwater rescue system of claim 4 or 5, comprising the steps of:
1) Detecting drowning of a human body in water, determining the position of a drowning person when the drowning phenomenon of the human body is detected, and executing the step 2), otherwise, repeatedly executing the step 1);
2) Throwing an underwater robot at a rescue place according to the position of drowned people;
3) Controlling the underwater robot to travel underwater to the position of the drowning person, collecting images of the person at that position, and judging and confirming whether the person is the rescue target;
4) Controlling the underwater robot to enable a human body supporting seat of the underwater robot to be aligned with the back of a drowning person;
5) Controlling the rescue fish to pull the rescue belt to surround the body of the drowned person;
6) And controlling the underwater robot to drive to the water surface to enable the head of the drowning person to be exposed out of the water surface.
CN202210228026.6A 2022-03-08 2022-03-08 Intelligent underwater lifesaving system and drowning detection and rescue method Active CN114735165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210228026.6A CN114735165B (en) 2022-03-08 2022-03-08 Intelligent underwater lifesaving system and drowning detection and rescue method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210228026.6A CN114735165B (en) 2022-03-08 2022-03-08 Intelligent underwater lifesaving system and drowning detection and rescue method

Publications (2)

Publication Number Publication Date
CN114735165A CN114735165A (en) 2022-07-12
CN114735165B true CN114735165B (en) 2023-02-07

Family

ID=82275614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210228026.6A Active CN114735165B (en) 2022-03-08 2022-03-08 Intelligent underwater lifesaving system and drowning detection and rescue method

Country Status (1)

Country Link
CN (1) CN114735165B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115973373B (en) * 2023-02-08 2024-06-21 浙江中裕通信技术有限公司 Emergency rescue method and equipment for marine distress

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150307172A1 (en) * 2014-04-29 2015-10-29 James Ng Robotic Drowning Rescue System
CN107351992B (en) * 2017-07-16 2019-06-21 泰州祥云软件开发有限公司 Rescue mode and its system under a kind of intelligent water based on unmanned plane
CN208876535U (en) * 2017-12-26 2019-05-21 深圳和而泰数据资源与云技术有限公司 A kind of anti-drowning self-rescuing device and waistband
CN111191486B (en) * 2018-11-14 2023-09-05 杭州海康威视数字技术股份有限公司 Drowning behavior recognition method, monitoring camera and monitoring system
CN112906545B (en) * 2021-02-07 2023-05-05 广东省科学院智能制造研究所 Real-time action recognition method and system for multi-person scene
CN113158962A (en) * 2021-05-06 2021-07-23 北京工业大学 Swimming pool drowning detection method based on YOLOv4
CN113306683A (en) * 2021-06-03 2021-08-27 娄少回 Self-binding water lifesaving equipment and water lifesaving method thereof
CN113291440B (en) * 2021-06-04 2022-11-11 大连海事大学 Water surface rescue method and device for unmanned ship capable of flying
CN113581421B (en) * 2021-07-30 2023-03-14 厦门大学 Robot for underwater rescue
CN114132459B (en) * 2021-12-07 2023-07-14 浙江海洋大学 Controllable diving self-propulsion U-shaped power life buoy and control system and control method

Also Published As

Publication number Publication date
CN114735165A (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN109522793A (en) Multi-person abnormal behavior detection and recognition method based on machine vision
CN113291440B (en) Water surface rescue method and device for unmanned ship capable of flying
CN106873627A (en) A kind of multi-rotor unmanned aerial vehicle and method of automatic detecting transmission line of electricity
CN110200598A (en) A kind of large-scale plant that raises sign exception birds detection system and detection method
CN114735165B (en) Intelligent underwater lifesaving system and drowning detection and rescue method
CN109683629A (en) Unmanned plane electric stringing system based on integrated navigation and computer vision
CN114560059B (en) Underwater lifesaving robot and rescuing method
CN111275924B (en) Unmanned aerial vehicle-based child drowning prevention monitoring method and system and unmanned aerial vehicle
CN114724177B (en) Human body drowning detection method combining Alphapos and YOLOv5s models
CN112357030B (en) A water quality monitoring machine fish for ocean or inland river lake
CN116255908B (en) Underwater robot-oriented marine organism positioning measurement device and method
CN113139497A (en) System and method for identifying water surface object and application based on 5G MEC
CN112489372A (en) Swimming pool monitoring and alarming system
CN110626474A (en) Man-machine cooperative intelligent life buoy and use method thereof
CN111252212B (en) Automatic rescue method and system for multiple drowning people by cooperation of navigable lifesaving device and unmanned aerial vehicle
CN102156994A (en) Joint positioning method of single-view unmarked human motion tracking
CN118124757A (en) Anti-drowning monitoring and rescuing method for AI unmanned aerial vehicle
CN111428696A (en) Intelligent abnormity detection emergency rescue method, device and system
CN109533228B (en) Intelligent underwater search and rescue robot
CN112345531B (en) Transformer fault detection method based on bionic robot fish
KR20230016730A (en) An automatic landing system to guide the drone to land precisely at the landing site
CN113044184A (en) Deep learning-based water rescue robot and drowning detection method
CN113313757B (en) Cabin passenger safety early warning algorithm based on monocular ranging
CN114620218A (en) Unmanned aerial vehicle landing method based on improved YOLOv5 algorithm
CN210155326U (en) Relative position appearance measurement system based on near-infrared beacon

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant