CN114560059A - Underwater lifesaving robot and rescuing method - Google Patents

Underwater lifesaving robot and rescuing method

Info

Publication number
CN114560059A
CN114560059A CN202210227619.0A CN202210227619A CN114560059A CN 114560059 A CN114560059 A CN 114560059A CN 202210227619 A CN202210227619 A CN 202210227619A CN 114560059 A CN114560059 A CN 114560059A
Authority
CN
China
Prior art keywords
lifesaving
human body
underwater
fish
belt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210227619.0A
Other languages
Chinese (zh)
Other versions
CN114560059B (en)
Inventor
李雪桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202210227619.0A priority Critical patent/CN114560059B/en
Publication of CN114560059A publication Critical patent/CN114560059A/en
Application granted granted Critical
Publication of CN114560059B publication Critical patent/CN114560059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63C LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C9/00 Life-saving in water
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63C LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C9/00 Life-saving in water
    • B63C9/01 Air-sea rescue devices, i.e. equipment carried by, and capable of being dropped from, an aircraft
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63C LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C9/00 Life-saving in water
    • B63C9/28 Adaptations of vessel parts or furnishings to life-saving purposes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63G OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G8/00 Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G8/001 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63G OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G8/00 Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G8/08 Propulsion
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63G OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G8/00 Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G8/38 Arrangement of visual or electronic watch equipment, e.g. of periscopes, of radar
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D1/00 Dropping, ejecting, releasing, or receiving articles, liquids, or the like, in flight
    • B64D1/02 Dropping, ejecting, or releasing articles
    • B64D1/08 Dropping, ejecting, or releasing articles the articles being load-carrying devices
    • B64D1/12 Releasing
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/08 Alarms for ensuring the safety of persons responsive to the presence of persons in a body of water, e.g. a swimming pool; responsive to an abnormal condition of a body of water
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/08 Alarms for ensuring the safety of persons responsive to the presence of persons in a body of water, e.g. a swimming pool; responsive to an abnormal condition of a body of water
    • G08B21/088 Alarms for ensuring the safety of persons responsive to the presence of persons in a body of water, e.g. a swimming pool; responsive to an abnormal condition of a body of water by monitoring a device worn by the person, e.g. a bracelet attached to the swimmer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/14 Relay systems
    • H04B7/15 Active relay systems
    • H04B7/185 Space-based or airborne stations; Stations for satellite systems
    • H04B7/18502 Airborne stations
    • H04B7/18506 Communications with or from aircraft, i.e. aeronautical mobile service
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/70 Services for machine-to-machine communication [M2M] or machine type communication [MTC]
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63C LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C9/00 Life-saving in water
    • B63C2009/0017 Life-saving in water characterised by making use of satellite radio beacon positioning systems, e.g. the Global Positioning System [GPS]
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63G OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G8/00 Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G8/001 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations
    • B63G2008/002 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations unmanned
    • B63G2008/005 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations unmanned remotely controlled
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 UAVs specially adapted for particular uses or applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Ocean & Marine Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Astronomy & Astrophysics (AREA)
  • Emergency Lowering Means (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The invention relates to an underwater lifesaving robot comprising a main body, a human body supporting seat, a life belt and a lifesaving fish, the front end of the life belt being connected with the lifesaving fish. A controller and, connected to the controller, a first wireless communication module and a GPS module are arranged inside the main body; a camera is provided at the head of the main body and connected to the controller through a data line. The main body is provided with a first turbine that supplies ascending and descending power in water, and its tail carries a second turbine that supplies forward, reverse and steering power in water; the control ends of the first and second turbines are respectively connected to the controller. The tail end of the life belt is connected to one side of the human body supporting seat, while the other side of the seat carries a mechanical claw for grasping and fixing the front end of the life belt, the mechanical claw control unit being connected to the controller. The invention thus provides an underwater robot for underwater rescue in which the life belt is wound around the drowning person so that the person's head can be lifted upward out of the water.

Description

Underwater lifesaving robot and rescuing method
Technical Field
The invention belongs to the field of water lifesaving, and particularly relates to an underwater lifesaving robot and a rescuing method.
Background
Water rescue places very high demands on timeliness and stability, and involves leading-edge technologies from a wide range of disciplines such as computing, automatic control and sensing.
Judging from the current state of water-rescue research at home and abroad, results on maneuverable rescue by unmanned aerial vehicles and underwater lifesaving robots are still relatively few; most work remains at the stage of preliminary research and testing, and the overall technical and service level is low. For swimming scenes in different water areas, whether densely or sparsely populated, the water-rescue strategies of the prior art only address detection and positioning; rapid maneuverable rescue and the prompt delivery of rescue services are not well realized, so that rescue sometimes cannot be provided in time even after a drowning has been recognized. Rescue techniques compatible with both open water areas and indoor swimming venues, and practical rescue capability, are also lacking. The smart bracelet and the rescue unmanned aerial vehicle have not been integrated, so existing water lifesaving systems at home and abroad remain incomplete, with a large gap to practical application and a low overall technical and service level.
Disclosure of Invention
The invention aims to solve the above problems and provides an underwater lifesaving robot and a method for rescuing drowning persons. An underwater lifesaving robot capable of autonomous underwater rescue of drowning persons is designed and provided, and the underwater lifesaving robot, an unmanned aerial vehicle, a smart bracelet and a server are integrated; accurate indoor and outdoor positioning is achieved by using geomagnetic positioning and GPS positioning as complements. On the one hand, the server uses a neural network model to recognize the posture of the human body in images captured in real time by the underwater lifesaving robot or an underwater camera, and judges from the posture whether drowning has occurred; on the other hand, it applies a machine-learning algorithm to the heart rate, underwater depth and X-, Y- and Z-axis acceleration collected in real time by the smart bracelet of a person in the water to judge in real time whether that person is drowning. Detecting drowning in these two complementary ways allows drowning persons to be discovered as early as possible and avoids casualties caused by delayed rescue.
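As an illustration of the dual-mode detection described above, the following sketch combines a vision-based decision with a simple rule over the bracelet readings. It is a minimal sketch, not the invention's algorithm: the field names (heart_rate, depth_m, acc) and all threshold values are assumptions, and the machine-learning classifier on the bracelet data is replaced by a placeholder rule.

```python
from dataclasses import dataclass
from typing import Sequence, Tuple

@dataclass
class BraceletSample:
    """One smart-bracelet reading; field names are illustrative, not from the patent."""
    heart_rate: float                 # beats per minute
    depth_m: float                    # depth under water, meters
    acc: Tuple[float, float, float]   # (ax, ay, az) acceleration, m/s^2

def bracelet_says_drowning(samples: Sequence[BraceletSample],
                           hr_limit: float = 150.0,
                           depth_limit: float = 1.5,
                           acc_limit: float = 25.0) -> bool:
    """Placeholder for the machine-learning classifier on bracelet data: flag drowning
    when heart rate, depth and acceleration magnitude all stay above illustrative
    thresholds for the whole window of samples."""
    if not samples:
        return False
    def magnitude(a: Tuple[float, float, float]) -> float:
        ax, ay, az = a
        return (ax * ax + ay * ay + az * az) ** 0.5
    return all(s.heart_rate > hr_limit and s.depth_m > depth_limit and magnitude(s.acc) > acc_limit
               for s in samples)

def fused_drowning_alarm(vision_says_drowning: bool, samples: Sequence[BraceletSample]) -> bool:
    """Either detection channel (camera pipeline or bracelet) is enough to raise the alarm."""
    return vision_says_drowning or bracelet_says_drowning(samples)
```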
The technical scheme of the invention is an underwater lifesaving robot comprising a main body, a human body supporting seat, a life belt and a lifesaving fish, the front end of the life belt being connected with the lifesaving fish. A controller and, connected to the controller, a first wireless communication module and a GPS module are arranged inside the main body; a camera is provided at the head of the main body and connected to the controller through a data line. The main body is provided with a first turbine for providing ascending and descending power in water, the tail of the main body is provided with a second turbine for providing forward, reverse and steering power in water, and the control ends of the first turbine and the second turbine are respectively connected with the controller;
the tail end of the life belt is connected with one side of the human body supporting seat; the other side of the human body supporting seat is provided with a mechanical claw used for holding and fixing the front end of the life belt, and a mechanical claw control unit is connected with the controller.
The lifesaving fish comprises a fish-imitating body and a propeller thruster arranged at the tail of the fish-imitating body, and the tail of the lifesaving fish is provided with a traction part for connecting and towing the life belt. A microprocessor and a second wireless communication module connected with the microprocessor are arranged in the fish-imitating body, and the control end of the driving motor of the propeller thruster is connected with the control signal output end of the microprocessor. The longitudinal rotating shaft of the propeller thruster is connected with the output shaft of a steering engine, and the control end of the steering engine is connected with the control signal output end of the microprocessor. Driven by the steering engine, the advancing direction of the lifesaving fish is controlled by adjusting the angle between the axis of the propeller thruster and the center line of the fish-imitating body. The second wireless communication module is in communication connection with the first wireless communication module of the underwater lifesaving robot.
Preferably, the underwater lifesaving robot further comprises a first geomagnetic sensor, and the first geomagnetic sensor is connected with the controller through a data line.
Preferably, a containing chamber for housing the life belt and the lifesaving fish is provided on the side of the human body supporting seat of the underwater lifesaving robot to which the life belt is connected.
Furthermore, the life belt is of a hollow airtight belt-shaped structure, the inflating end of the life belt is connected with the high-pressure air tank through an electromagnetic valve, and the control end of the electromagnetic valve is connected with the control signal output end of the controller.
Preferably, the life-saving fish further comprises a second geomagnetic sensor electrically connected with the microprocessor.
Preferably, the underwater lifesaving robot is in communication connection with a server; the server judges from images provided by the underwater lifesaving robot or by an underwater camera whether the person in the image is drowning and, when drowning is detected, sends the image and the position of the drowning person to the underwater lifesaving robot.
A human body drowning detection method based on an underwater human body image comprises the following steps:
step 1: shooting to obtain an image of a human body in water;
step 2: carrying out human body target detection on the image by using a YOLOv5s model to obtain a human body positioning frame;
and step 3: recognizing the human body posture in the human body positioning frame by adopting an Alphapos model to obtain a bone key point and determine a coordinate of the bone key point;
and 4, step 4: calculating to obtain the body posture characteristics of the human body according to the coordinates of the key points of the human skeleton;
and 5: judging whether drowning occurs according to the body state characteristics of the human body, judging the length of drowning time if drowning occurs, and giving an alarm.
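Read as a linear pipeline, these five steps can be wired together as in the sketch below. The detector, pose model, feature extractor and decision rule are injected as callables because the patent does not fix a particular implementation; every function name here is a placeholder.

```python
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]     # (x1, y1, x2, y2) person positioning frame
Keypoints = List[Tuple[float, float]]       # 18 (x, y) skeletal key points

def detect_drowning_in_image(
    image: object,
    detect_persons: Callable[[object], List[Box]],       # step 2: e.g. a YOLOv5s wrapper
    estimate_pose: Callable[[object, Box], Keypoints],    # step 3: e.g. an AlphaPose wrapper
    body_features: Callable[[Keypoints], dict],           # step 4: posture features
    is_drowning: Callable[[dict], bool],                   # step 5: decision rule
) -> List[Box]:
    """Return the positioning frames of persons judged to be drowning in one captured frame."""
    alarms: List[Box] = []
    for box in detect_persons(image):                     # one frame per person detected in the water
        keypoints = estimate_pose(image, box)
        features = body_features(keypoints)
        if is_drowning(features):
            alarms.append(box)
    return alarms
```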
The drowning person rescue method comprises the following steps:
1) detecting drowning of persons in the water; when drowning is detected, determining the position of the drowning person and executing step 2), otherwise repeating step 1);
2) deploying the underwater lifesaving robot to the rescue site according to the position of the drowning person;
3) controlling the underwater lifesaving robot to travel underwater to the position of the drowning person, acquiring images of the person at that position, and judging and confirming whether that person is the rescue target;
4) controlling the underwater lifesaving robot so that its human body supporting seat is aligned with the back of the drowning person;
5) controlling the lifesaving fish to tow the life belt around the body of the drowning person; after the lifesaving fish has drawn the life belt around the body, controlling the mechanical claw to fix the towed end of the life belt and inflating the life belt;
6) controlling the underwater lifesaving robot to drive toward the water surface so that the head of the drowning person is lifted above the water.
Compared with the prior art, the invention has the beneficial effects that:
1) the invention provides an underwater lifesaving robot for the underwater rescue of drowning persons; it is equipped with a human body supporting seat and a life belt, can rise, descend and turn autonomously in the water, and during a rescue the life belt binds the drowning person to the supporting seat so that the head can be lifted upward out of the water;
2) the underwater lifesaving robot includes a lifesaving fish; during a rescue the lifesaving fish tows the life belt around the body of the drowning person. The lifesaving fish operates with good independence, which makes it easy, once the body is secured to the supporting seat of the robot, to lift the head of the drowning person clear of the water and rescue the person in time;
3) the invention uses geomagnetic positioning to compensate for weak indoor GPS signals, achieving accurate positioning of drowning persons in indoor waters and enabling timely rescue;
4) the method detects the human target in the image with a YOLOv5s model; after the human body positioning frame is obtained, an improved AlphaPose model recognizes the posture within the frame, the skeletal key points and their coordinates are determined, and the body posture features are calculated, so that drowning can be judged in real time with high detection accuracy;
5) the rescue method has a high success rate and protects the drowning person well during the rescue.
Drawings
The invention is further illustrated by the following figures and examples.
Fig. 1 is a schematic view of an underwater lifesaving robot according to an embodiment of the present invention.
Fig. 2 is a schematic view of a human body supporting seat of the underwater lifesaving robot according to the embodiment of the invention.
Fig. 3 is a schematic structural view of a life-saving fish according to an embodiment of the present invention.
Fig. 4 is a circuit structure block diagram of the underwater lifesaving robot of the embodiment of the invention.
Fig. 5 is a schematic view of the unmanned aerial vehicle deploying the underwater lifesaving robot.
Fig. 6 is a schematic structural diagram of the YOLOv5s model.
Fig. 7 is a schematic diagram of an improved YOLOv5s model according to an embodiment of the present invention.
FIG. 8 is a schematic diagram of the AlphaPose model according to an embodiment of the invention.
FIG. 9a is a schematic diagram of key points of human bones according to an embodiment of the present invention.
Fig. 9b is a schematic diagram of the underwater human skeleton key points according to the embodiment of the invention.
Fig. 10 is a schematic flow chart of a human drowning detection method according to an embodiment of the present invention.
Detailed Description
As shown in fig. 1 to 4, the underwater lifesaving robot 100 comprises a main body 101, a human body supporting base 105, a lifesaving belt 103 and a lifesaving fish 102, wherein the front end of the lifesaving belt 103 is connected with the lifesaving fish 102; a controller, a first wireless communication module, a GPS module and a first geomagnetic sensor which are connected with the controller are arranged in the main body 101, a camera 104 is arranged at the head of the main body, and the camera 104 is connected with the controller through a data line; the main body 101 is provided with a first turbine 106 for providing ascending and descending power in water, the tail part of the main body 101 is provided with a second turbine 107 for providing advancing, backing and steering power in water, and the control ends of the first turbine 106 and the second turbine 107 are respectively connected with the controller. The tail end of the life belt 103 is connected with one side of the human body supporting seat 105; the other side of the human body supporting seat 105 is provided with a mechanical claw 111 for grasping and fixing the front end of the life belt, and a mechanical claw control unit is connected with the controller.
The mechanical claw 111 of the embodiment is a gripper of the related art whose grasping action is controlled by a servomotor.
A containing chamber 109 for housing the life belt and the lifesaving fish is provided on the side of the human body supporting seat 105 to which the life belt is connected. The life belt 103 is a hollow airtight belt-shaped structure; its inflation end is connected to the high-pressure gas tank 108 through an electromagnetic valve, and the control end of the electromagnetic valve is connected to the control signal output end of the controller. In the embodiment, the high-pressure gas tank 108 stores liquid nitrogen.
As shown in fig. 3, the rescue fish 102 includes a fish-like body 1021 and a propeller 1022 provided at the tail of the fish-like body, and the tail of the rescue fish is provided with a towing portion 1023 for connecting and towing a rescue belt. A microprocessor, a second wireless communication module and a second geomagnetic sensor connected with the microprocessor are arranged in the fish-imitating body 1021, and a control end of a driving motor of the propeller 1022 is connected with a control signal output end of the microprocessor. The longitudinal rotating shaft of the propeller 1022 is connected with the output shaft of the steering engine 1024, and the control end of the steering engine 1024 is connected with the control signal output end of the microprocessor. Under the driving of the steering engine, the advancing direction of the life-saving fish is controlled by adjusting the angle between the axis of the propeller thruster and the central line of the fish-imitating body.
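The steering principle described above, deflecting the propeller thruster away from the center line of the fish-imitating body, can be pictured as a simple proportional law. The sketch below is illustrative only; the gain and the mechanical deflection limit are assumed values, not taken from the patent.

```python
def heading_error_deg(current_heading: float, target_heading: float) -> float:
    """Smallest signed difference between two compass headings, in degrees."""
    return (target_heading - current_heading + 180.0) % 360.0 - 180.0

def thruster_angle_command(heading_error: float,
                           gain: float = 1.5,
                           max_angle_deg: float = 30.0) -> float:
    """Proportional steering for the lifesaving fish: deflect the propeller thruster
    from the body center line in proportion to the heading error, saturating at the
    assumed mechanical limit of the steering engine."""
    angle = gain * heading_error
    return max(-max_angle_deg, min(max_angle_deg, angle))

# Example: fish heading 10 degrees, target 40 degrees -> command saturates at +30 degrees.
if __name__ == "__main__":
    print(thruster_angle_command(heading_error_deg(10.0, 40.0)))
```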
In the embodiment, the underwater lifesaving robot is in communication connection with the server; the server judges from images provided by the underwater lifesaving robot, by an underwater camera or by the unmanned aerial vehicle 200 whether the person in the image is drowning and, when drowning is detected, sends the image and the position of the drowning person to the underwater lifesaving robot. Both the unmanned aerial vehicle and the underwater lifesaving robot of the embodiment combine geomagnetic positioning with GPS positioning and use the geomagnetic sensor for accurate positioning in environments with weak GPS signals, such as indoor swimming pools.
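The complementary use of GPS and geomagnetic positioning amounts to selecting the position source by signal quality. The sketch below shows one plausible selection rule; the use of HDOP as the quality measure and the threshold value are assumptions, not the patent's method.

```python
from typing import Optional, Tuple

Position = Tuple[float, float]   # (latitude, longitude) or local (x, y)

def choose_position(gps_fix: Optional[Position],
                    gps_hdop: float,
                    geomagnetic_fix: Optional[Position],
                    hdop_limit: float = 5.0) -> Optional[Position]:
    """Prefer the GPS fix while its horizontal dilution of precision is acceptable
    (open water); fall back to the geomagnetic fix indoors, where GPS is weak or absent."""
    if gps_fix is not None and gps_hdop <= hdop_limit:
        return gps_fix
    return geomagnetic_fix
```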
As shown in fig. 5, a connecting rod 201 for connecting with the underwater lifesaving robot is arranged below a cantilever of the unmanned aerial vehicle 200, an electric hydraulic clamp 202 is arranged at a connecting end of the connecting rod 201 and the underwater lifesaving robot, a snap head of the electric hydraulic clamp 202 is matched with a connecting hole 111 at the top of the underwater lifesaving robot, and a control end of the electric hydraulic clamp 202 is connected with a control unit of the unmanned aerial vehicle; the unmanned aerial vehicle 200 further comprises a second GPS module and a third wireless communication module, and the second GPS module and the third wireless communication module are respectively connected with the control unit of the unmanned aerial vehicle.
In the embodiment, when the unmanned aerial vehicle 200 transports the underwater lifesaving robot 100 through the air, the control unit makes the engagement heads of the electric hydraulic clamps 202 at the bottom ends of the connecting rods below the four cantilevers of the drone engage the connecting holes 111 on top of the underwater lifesaving robot, so that the robot 100 is attached to the drone 200 and the drone 200 carries the robot 100 in flight. When the drone 200 releases the underwater lifesaving robot 100 in the air, the control unit opens the engagement heads of the four electric hydraulic clamps 202, and the robot 100 separates from the drone 200 under gravity, completing the drop.
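The carry-and-drop procedure reduces to engaging and releasing the four clamps in sequence. The clamp interface in the sketch below is hypothetical; it only records the order of operations described above.

```python
from typing import Protocol, Sequence

class Clamp(Protocol):
    """Hypothetical driver interface for one electric hydraulic clamp under a drone arm."""
    def engage(self) -> None: ...
    def release(self) -> None: ...

def attach_robot(clamps: Sequence[Clamp]) -> None:
    """Lock every clamp onto the connecting holes on top of the lifesaving robot before flight."""
    for clamp in clamps:
        clamp.engage()

def drop_robot(clamps: Sequence[Clamp]) -> None:
    """Open every clamp; the robot then separates from the drone under gravity."""
    for clamp in clamps:
        clamp.release()
```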
The second wireless communication module of the lifesaving fish 102 is in communication connection with the first wireless communication module of the underwater lifesaving robot 100; the first wireless communication module of the underwater lifesaving robot 100, the third wireless communication module of the unmanned aerial vehicle 200 and the fourth wireless communication module of the smart band 300 are in communication connection with the server through wireless routers respectively.
The postures of a person in the water can be roughly divided into two types: one roughly parallel to the water surface and one roughly perpendicular to it, with intermediate postures counted as perpendicular. A drowning person generally belongs to the second type, with the posture perpendicular to the water surface; at this stage the person is still conscious, the state may last relatively long, and the drowning features are easier to capture. Reliable and rapid detection at the moment a person in the water first shows signs of drowning therefore has greater practical significance. As shown in fig. 9a, 18 skeletal key points of the human body are extracted, and the posture of the human body is determined from the relative positions of these 18 key points.
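The patent does not name the 18 key points. Assuming the common OpenPose/COCO 18-point convention, an index-to-joint mapping and a coarse parallel-versus-perpendicular posture test based on the trunk direction might look like the sketch below; both the mapping and the 45-degree split are assumptions.

```python
import math
from typing import Dict, Tuple

# Assumed OpenPose BODY-18 style indexing; the patent only states that 18 key points are used.
KEYPOINT_NAMES: Dict[int, str] = {
    0: "nose", 1: "neck",
    2: "right_shoulder", 3: "right_elbow", 4: "right_wrist",
    5: "left_shoulder", 6: "left_elbow", 7: "left_wrist",
    8: "right_hip", 9: "right_knee", 10: "right_ankle",
    11: "left_hip", 12: "left_knee", 13: "left_ankle",
    14: "right_eye", 15: "left_eye", 16: "right_ear", 17: "left_ear",
}

def trunk_angle_deg(keypoints: Dict[int, Tuple[float, float]]) -> float:
    """Angle (0-90 degrees) between the neck-to-mid-hip line (the trunk) and the horizontal axis."""
    neck = keypoints[1]
    mid_hip = ((keypoints[8][0] + keypoints[11][0]) / 2.0,
               (keypoints[8][1] + keypoints[11][1]) / 2.0)
    dx, dy = mid_hip[0] - neck[0], mid_hip[1] - neck[1]
    return math.degrees(math.atan2(abs(dy), abs(dx)))

def posture_class(keypoints: Dict[int, Tuple[float, float]], split_deg: float = 45.0) -> str:
    """Coarse two-class posture: roughly parallel or roughly perpendicular to the water surface."""
    return "perpendicular" if trunk_angle_deg(keypoints) > split_deg else "parallel"
```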
When drowning occurs, the body posture features include:
1) the underwater drowning feature: the feet swing continuously and the body bobs up and down, so the position coordinates of the feet change over time;
2) the final drowning feature: the body lies parallel to the water surface and floats on it; whether the relative positions of the key skeletal points change over time can be checked in order to conclude whether the final drowning feature has appeared.
When a person has just begun to drown, the body tends to face upward and the legs bob up and down under water; the whole body is difficult to recognize in an image taken below the water surface. The change over time of the relative positions of the lower-body skeletal key points shown in fig. 9b is therefore taken as the core cue for detecting the underwater drowning state.
As shown in fig. 10, the method for detecting human drowning based on the human body image in water includes the following steps:
step 1: shooting to obtain an image of a human body in water;
step 2: carrying out human body target detection on the image by using the improved YOLOv5s model to obtain a human body positioning frame;
as shown in fig. 6, the network model of YOLOv5s includes a Backbone network backhaul, a multi-scale feature fusion module Neck, and a Prediction end Prediction.
In the first layer (Focus) of the Backbone network, the YOLOv5s of the embodiment reconstructs the high-resolution image by stacking the four neighbouring points of each pixel into a lower-resolution image, which enlarges the receptive field of each point and reduces the loss of original information, thereby lowering the amount of computation and increasing the computation speed. The third layer, C3/BottleneckCSP, of the Backbone network consists of a Bottleneck part and a CSP part and is intended to improve the learning ability of the convolutional neural network. The SPP module in the ninth layer of the Backbone network applies 5/9/13 max-pooling and then performs concat fusion, enlarging the receptive field of each point. The multi-scale feature fusion module Neck of the embodiment's YOLOv5s adopts the Mask R-CNN and FPN frameworks to help pixels carry strong localization features; using the two frameworks together strengthens the feature-fusion ability of the network.
The Prediction module of the embodiment's YOLOv5s contains a bounding-box loss function and non-maximum suppression (NMS). The parameter iou-thres is added to YOLOv5s in the embodiment, which effectively handles non-overlapping bounding boxes; setting the value to 0.45 gives the best results and effectively improves the regression speed and accuracy of the predicted boxes. The embodiment's YOLOv5s applies NMS in the human detection and localization stage, which strengthens recognition in complex conditions such as light, shadow and occlusion and yields the optimal target detection box.
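The effect of the iou-thres parameter on non-maximum suppression can be illustrated with a plain-Python greedy NMS over (box, score) pairs. This is a generic sketch, not the YOLOv5 source code; only the 0.45 threshold comes from the text.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(detections: List[Tuple[Box, float]], iou_thres: float = 0.45) -> List[Tuple[Box, float]]:
    """Greedy NMS: keep the highest-scoring box, then drop any remaining box whose IoU
    with an already kept box exceeds iou_thres (0.45 in the embodiment)."""
    kept: List[Tuple[Box, float]] = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k) <= iou_thres for k, _ in kept):
            kept.append((box, score))
    return kept
```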
As shown in fig. 7, the improved YOLOv5s model of the embodiment starts from the YOLOv5s model, adds convolution weight coefficients to the convolutional layers of both the Backbone network and the head network, adds upsampling coefficients to the upsampling layers of both networks, sets the numbers of BottleneckCSP modules to (3, 6, 6, 3), and sets the number of categories nc to 1.
The improved target detection model YOLOv5s of the embodiment has 8.1 M parameters, compared with 7.1 M for the prior-art YOLOv5s, where M denotes million.
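The stated structural changes can be collected into a single configuration record. The dictionary below is only an illustrative summary (the real YOLOv5 configurations are YAML files); the values of the added coefficients are not given in the text and are left as placeholders.

```python
# Illustrative summary of the modified detector configuration; only nc, the BottleneckCSP
# repeat counts and iou_thres come from the text, the coefficient values are unspecified.
IMPROVED_YOLOV5S_CFG = {
    "nc": 1,                                 # single detection class: person
    "bottleneck_csp_repeats": (3, 6, 6, 3),  # C3/BottleneckCSP block counts
    "conv_weight_coeff": None,               # extra weight coefficient on backbone/head conv layers
    "upsample_coeff": None,                  # extra coefficient on backbone/head upsampling layers
    "iou_thres": 0.45,                       # NMS IoU threshold at prediction time
}
```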
And step 3: recognizing the human body posture in the human body positioning frame by adopting an Alphapos model to obtain a bone key point and determine a coordinate of the bone key point;
as shown in fig. 8, the alphaphase model of the embodiment uses a Regional Multi-Person position Estimation Network (RMPE) to detect and recognize the Multi-Person position according to the human body positioning frame of the image, and the RMPE includes a Space Transformation Network (STN), a Single Person Position Estimation (SPPE), a Spatial De-Transformer Network (SDTN), and a position Non-Maximum suppressor (PPNMS) connected in sequence.
The STN segments and extracts a single human body positioning frame from an image containing multiple positioning frames so that single-person pose recognition can then be performed. The SDTN maps the detected and recognized human pose back onto the original image; it realizes the inverse process of the STN, and the STN and SDTN together form a Symmetric Spatial Transformer Network (SSTN). The SPPE detects and recognizes the human pose within a positioning frame. The combination of SSTN and SPPE in the embodiment realizes pose detection and recognition for several persons in the image. The PPNMS eliminates redundant pose estimates: by defining a distance between poses to compute their similarity, redundant detection frames are deleted, which improves the accuracy of human pose detection and recognition.
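The STN-SPPE-SDTN-PPNMS chain can be expressed structurally as a composition of callables, as in the sketch below. The stages are injected rather than implemented, so this mirrors only the data flow described above, not the actual AlphaPose code.

```python
from typing import Callable, List, Sequence, Tuple

Box = Tuple[float, float, float, float]
Pose = List[Tuple[float, float]]       # key-point coordinates in original-image space

def rmpe_multi_person_pose(
    image: object,
    boxes: Sequence[Box],
    stn: Callable[[object, Box], object],            # crop/transform one person region
    sppe: Callable[[object], Pose],                  # single-person pose estimation
    sdtn: Callable[[Pose, Box], Pose],               # map the pose back to the original image
    pose_nms: Callable[[List[Pose]], List[Pose]],    # remove redundant pose estimates
) -> List[Pose]:
    """Structural sketch of RMPE: for each positioning frame, the STN extracts the person
    region, the SPPE estimates the pose, the SDTN restores it to original-image coordinates,
    and pose NMS finally removes duplicate estimates of the same person."""
    poses = [sdtn(sppe(stn(image, box)), box) for box in boxes]
    return pose_nms(poses)
```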
During the training of the RMPE, a pipeline parallel to the SSTN is added; this parallel pipeline comprises a Pose-Guided Proposals Generator (PGPG) and a parallel SPPE. From the ground-truth poses and positioning frames of the training samples, the PGPG generates positioning-frame samples that follow a specific distribution, adding samples with inaccurate positioning frames so that the combined SSTN and SPPE pose-detection model of the embodiment is trained to adapt to inaccurate human body positioning frames. To improve the accuracy of pose detection and recognition, the pose labels output by the SPPE are centered. The parallel SPPE shares the STN with the SPPE of the pose-detection model; during training, the pose output of the parallel SPPE is compared with the centered ground-truth pose of the training sample, the deviation is calculated and back-propagated to the STN, so that the STN learns to focus on the correct region and extract high-quality human body regions.
The RMPE of the embodiment follows the RMPE model published in the paper "RMPE: Regional Multi-Person Pose Estimation".
In the embodiment, the human targets in the image are processed with the AlphaPose model to obtain the 18 skeletal key points of the human body shown in fig. 9a and their coordinates in the image.
And 4, step 4: calculating to obtain human body posture characteristics according to the coordinates of key points of human skeleton, judging whether drowning occurs according to the human body posture characteristics, judging the length of drowning time if drowning occurs, and giving an alarm;
step 4.1: respectively calculating the linear velocity V of the key points (10) and (13) of bones corresponding to the feet of the human body10、V13And using the bone barrierCalculating the linear velocity of the key points (11) and (13) to obtain the velocity V of the single leg of the human body11-13(ii) a Calculating linear velocity V of human chest gravity center OO
In the embodiment, the linear velocities are calculated from image frames separated by a time interval within M consecutive frames. The position of the center of gravity O of the chest is calculated from the coordinates of the skeletal key points (1), (8) and (11).
Step 4.2: calculating the linear velocity V of the human foot10、V13Linear velocity V with human body gravity center OOThe proportion of (2) preliminarily judging whether the drowning phenomenon occurs or not;
The criterion for judging drowning is formula (I), reproduced only as an image in the original publication; it relates the foot velocities V10 and V13, the center-of-gravity velocity VO and the single-leg velocity V11-13 to the threshold λ and the limits α and β defined below.
In the formula, λ denotes a set threshold, and α and β denote the upper and lower limit values, respectively, of the single-leg velocity of the human body;
If V10, V13, VO and V11-13 satisfy formula (I), underwater drowning is preliminarily judged and the procedure ends; if formula (I) is not satisfied, step 4.3 is executed;
step 4.3: calculating the included angle theta between the perpendicular bisector of the upper half of the human body and the horizontal planeaIf theta0<θa<θ1Angle of inclination thetaaIs continuously kept in the range for a time T, and a < VOIf < b, the drowning is judged to be finally done, and the operation is finished.
In the formula [ theta ]0Represents the minimum value of the angle between the perpendicular bisector of the upper half of the drowned human body and the horizontal plane, theta1The maximum value of an included angle between a perpendicular bisector of the upper half of the drowned human body and a horizontal plane is shown, a represents a lower limit value of the speed of the gravity center line of the human body, and b represents an upper limit value of the speed of the gravity center line of the human body; otherwise, executing step 4.4;
step 4.4: calculating the included angle theta between the perpendicular bisector and the horizontal line of the human bodybIf theta2<θb<θ3Angle of inclination thetabContinuously kept within the range for a time TAnd c < VoD, and the key points of the bones of the human head can be detected, the drowning is judged to be finally done, otherwise, the drowning is judged not to be done, and theta is shown in the formula2Represents the minimum value of the included angle between the perpendicular bisector and the horizontal plane of a drowned human body, theta3The included angle between the perpendicular bisector and the horizontal plane of the drowned human body is represented as the maximum value, c represents the minimum value of the linear velocity of the center of gravity of the human body, and d represents the maximum value of the linear velocity of the center of gravity of the human body.
The rescue method for drowning people comprises the following steps:
1) the server performs drowning detection on images of persons in the water collected by the unmanned aerial vehicle, an underwater camera or the underwater lifesaving robot; when drowning is detected, it determines the position of the drowning person, sends the positioning coordinates and the image of the drowning person to the underwater lifesaving robot, sends the positioning coordinates to the unmanned aerial vehicle, and step 2) is executed; otherwise step 1) is repeated;
2) the unmanned aerial vehicle drops the underwater lifesaving robot at the rescue site according to the positioning coordinates of the drowning person;
3) the underwater lifesaving robot is controlled to travel underwater to the position of the drowning person, acquire images of the person at that position, and judge and confirm whether that person is the rescue target;
4) the underwater lifesaving robot is controlled so that its human body supporting seat is aligned with the back of the drowning person;
5) the lifesaving fish is controlled to tow the life belt around the body of the drowning person; after the lifesaving fish has drawn the life belt around the body, the mechanical claw is controlled to grasp the lifesaving fish or the towed end of the life belt so as to fix that end; once the life belt is wound and fixed, it is inflated;
6) the underwater lifesaving robot is controlled to drive toward the water surface so that the head of the drowning person is lifted above the water.
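The rescue procedure is a fixed sequence of actions; a minimal coordinator for steps 3) to 6) might look like the sketch below. The hardware hooks are hypothetical callables, each assumed to block until its action completes, and the sketch omits error handling.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class RescueActions:
    """Hypothetical hooks onto the robot hardware; each call blocks until the action completes."""
    drive_to: Callable[[Tuple[float, float], float], None]   # (x, y) position and depth
    confirm_target: Callable[[], bool]                        # camera-based target confirmation
    align_support_seat: Callable[[], None]                    # face the support seat to the person's back
    deploy_fish_with_belt: Callable[[], None]                 # lifesaving fish tows the belt around the body
    close_gripper: Callable[[], None]                         # fix the towed end of the life belt
    inflate_belt: Callable[[], None]                          # open the valve to inflate the belt
    surface: Callable[[], None]                               # drive upward until the head is above water

def run_rescue(actions: RescueActions, position: Tuple[float, float], depth: float) -> bool:
    """Steps 3)-6) of the rescue method; returns False if the person at the given position
    is not confirmed as the rescue target."""
    actions.drive_to(position, depth)
    if not actions.confirm_target():
        return False
    actions.align_support_seat()
    actions.deploy_fish_with_belt()
    actions.close_gripper()
    actions.inflate_belt()
    actions.surface()
    return True
```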
The improved YOLOv5s model is lightweight and strikes a balance between detection accuracy and speed, improving the timeliness with which drowning is detected.
When the underwater lifesaving robot 100 inflates the life belt, the controller opens the electromagnetic valve at the inflation end of the life belt 103; the liquid nitrogen in the high-pressure gas tank 108 gasifies as it flows through the valve and fills the life belt 103.
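The inflation itself amounts to opening the electromagnetic valve long enough for the gasified nitrogen to fill the belt and then closing it. The valve interface and the hold time in the sketch below are assumptions, not values given in the patent.

```python
import time
from typing import Protocol

class SolenoidValve(Protocol):
    """Hypothetical driver for the electromagnetic valve between the gas tank and the life belt."""
    def open(self) -> None: ...
    def close(self) -> None: ...

def inflate_life_belt(valve: SolenoidValve, hold_s: float = 3.0) -> None:
    """Open the valve for hold_s seconds so the gasified nitrogen can fill the belt, then close it."""
    valve.open()
    try:
        time.sleep(hold_s)
    finally:
        valve.close()
```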
In the embodiment, the underwater lifesaving robot 100 further comprises a target recognition module running on the processor of the controller. The target recognition module recognizes the human body in the water from the images captured by the camera 104 and compares the image features of that body with the image features of the drowning person provided by the server, so as to judge whether the person is the rescue target.
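The feature comparison could be as simple as a similarity check between two feature vectors. The sketch below uses cosine similarity with an assumed threshold; the patent does not specify the feature representation or the comparison rule.

```python
import math
from typing import Sequence

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a > 0 and norm_b > 0 else 0.0

def is_rescue_target(robot_features: Sequence[float],
                     server_features: Sequence[float],
                     threshold: float = 0.8) -> bool:
    """Confirm the person seen by the on-board camera as the drowning person reported by
    the server when the two feature vectors are sufficiently similar."""
    return cosine_similarity(robot_features, server_features) >= threshold
```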

Claims (8)

1. An underwater lifesaving robot is characterized by comprising a main body (101), a human body supporting seat (105), a lifesaving belt (103) and lifesaving fish (102), wherein the front end of the lifesaving belt (103) is connected with the lifesaving fish (102);
a controller, a first wireless communication module and a GPS module which are connected with the controller are arranged in the main body (101), a camera (104) is arranged at the head of the main body, and the camera (104) is connected with the controller through a data line;
the main body (101) is provided with a first turbine (106) for providing ascending and descending power in water, the tail part of the main body (101) is provided with a second turbine (107) for providing advancing, backing and steering power in water, and the control ends of the first turbine (106) and the second turbine (107) are respectively connected with the controller;
the tail end of the life belt (103) is connected with one side of the human body supporting seat (105);
the other side of the human body supporting seat (105) is provided with a mechanical claw (111) used for grasping and fixing the front end of the life belt, and a mechanical claw control unit is connected with the controller.
2. An underwater lifesaving robot as claimed in claim 1, wherein the lifesaving fish (102) comprises a fish-imitating body (1021) and a propeller thruster (1022) provided at the tail of the fish-imitating body, the lifesaving fish tail being provided with a traction part (1023) for connecting and towing the lifesaving belt;
a microprocessor and a second wireless communication module connected with the microprocessor are arranged in the fish-imitating body (1021), and the control end of a driving motor of the propeller thruster (1022) is connected with the control signal output end of the microprocessor;
the longitudinal axis of the propeller (1022) is connected with the output shaft of the steering engine (1024), and the control end of the steering engine (1024) is connected with the control signal output end of the microprocessor;
under the driving of the steering engine, the advancing direction of the life-saving fish is controlled by adjusting the angle between the axis of the propeller thruster and the central line of the fish-imitating body.
3. An underwater lifesaving robot as claimed in claim 2, wherein the underwater lifesaving robot (100) further comprises a first geomagnetic sensor connected to a controller thereof.
4. The underwater lifesaving robot of claim 3, wherein a receiving room (109) for receiving the lifesaving belt and the lifesaving fish is provided at a side where the human body supporting base (105) is connected to the lifesaving belt.
5. The underwater lifesaving robot of claim 4, wherein the lifesaving belt (103) is a hollow airtight belt-shaped structure, the inflation end of the lifesaving belt (103) is connected with the high pressure gas tank (108) through an electromagnetic valve, and the control end of the electromagnetic valve is connected with the control signal output end of the controller.
6. An underwater rescue robot as claimed in claim 5, characterized in that the rescue fish (102) further comprises a second geomagnetic sensor electrically connected to the microprocessor.
7. The drowning person rescue method of an underwater rescue robot according to any one of claims 2 to 6, comprising the steps of:
1) detecting drowning of a human body in water, determining the position of a drowning person when the drowning phenomenon of the human body is detected, and executing the step 2), otherwise, repeatedly executing the step 1);
2) throwing an underwater lifesaving robot at a rescue place according to the position of drowned people;
3) controlling the underwater lifesaving robot to travel underwater to the position of the drowning person, acquiring images of the person at that position, and judging and confirming whether that person is the rescue target;
4) controlling the underwater lifesaving robot to enable a human body supporting seat of the underwater lifesaving robot to be aligned to the back of a drowning person;
5) controlling the rescue fish to draw the rescue belt to surround the body of the drowned person;
6) the underwater lifesaving robot is controlled to drive to the water surface, so that the head of the drowning person is exposed out of the water surface.
8. A human body drowning detection method based on a human body image in water is characterized by comprising the following steps:
step 1: shooting to obtain an image of a human body in water;
step 2: carrying out human body target detection on the image by using a YOLOv5s model to obtain a human body positioning frame;
and step 3: recognizing the human body posture in the human body positioning frame by adopting an Alphapos model to obtain a skeleton key point and determine the coordinate of the skeleton key point;
and 4, step 4: calculating to obtain the body posture characteristics of the human body according to the coordinates of the key points of the human skeleton;
and 5: judging whether drowning occurs according to the body state characteristics of the human body, judging the length of drowning time if drowning occurs, and giving an alarm.
CN202210227619.0A 2022-03-08 2022-03-08 Underwater lifesaving robot and rescuing method Active CN114560059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210227619.0A CN114560059B (en) 2022-03-08 2022-03-08 Underwater lifesaving robot and rescuing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210227619.0A CN114560059B (en) 2022-03-08 2022-03-08 Underwater lifesaving robot and rescuing method

Publications (2)

Publication Number Publication Date
CN114560059A true CN114560059A (en) 2022-05-31
CN114560059B CN114560059B (en) 2023-02-03

Family

ID=81717342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210227619.0A Active CN114560059B (en) 2022-03-08 2022-03-08 Underwater lifesaving robot and rescuing method

Country Status (1)

Country Link
CN (1) CN114560059B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150307172A1 (en) * 2014-04-29 2015-10-29 James Ng Robotic Drowning Rescue System
CN107351992A (en) * 2017-07-16 2017-11-17 汤庆佳 Rescue mode and its system under a kind of intelligent water based on unmanned plane
CN111191486A (en) * 2018-11-14 2020-05-22 杭州海康威视数字技术股份有限公司 Drowning behavior recognition method, monitoring camera and monitoring system
CN112906545A (en) * 2021-02-07 2021-06-04 广东省科学院智能制造研究所 Real-time action recognition method and system for multi-person scene
CN113158962A (en) * 2021-05-06 2021-07-23 北京工业大学 Swimming pool drowning detection method based on YOLOv4
CN113306683A (en) * 2021-06-03 2021-08-27 娄少回 Self-binding water lifesaving equipment and water lifesaving method thereof
CN113291440A (en) * 2021-06-04 2021-08-24 大连海事大学 Water surface rescue method and device for unmanned ship capable of flying
CN113581421A (en) * 2021-07-30 2021-11-02 厦门大学 Robot for underwater rescue
CN114132459A (en) * 2021-12-07 2022-03-04 浙江海洋大学 Controllable submersible self-propelled U-shaped power life buoy, control system and control method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马敬奇 et al.: "Fall behavior detection algorithm for the elderly based on an optimized AlphaPose model", 《计算机应用》 (Journal of Computer Applications) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116434081A (en) * 2023-04-25 2023-07-14 广东工业大学 Underwater robot control management method and system based on 5G+ cloud edge end

Also Published As

Publication number Publication date
CN114560059B (en) 2023-02-03

Similar Documents

Publication Publication Date Title
CN107571977B (en) FPGA-based autonomous lifesaving system and lifesaving method for small and medium water areas
CN103057678B (en) The autonomous navigation of benthic organism hauls robot and man-machine coordination fishing operation system
CN109522793A (en) More people&#39;s unusual checkings and recognition methods based on machine vision
CN110200598A (en) A kind of large-scale plant that raises sign exception birds detection system and detection method
CN113291440B (en) Water surface rescue method and device for unmanned ship capable of flying
CN114560059B (en) Underwater lifesaving robot and rescuing method
CN111562791A (en) System and method for identifying visual auxiliary landing of unmanned aerial vehicle cooperative target
CN109683629A (en) Unmanned plane electric stringing system based on integrated navigation and computer vision
CN114735165B (en) Intelligent underwater lifesaving system and drowning detection and rescue method
CN112357030B (en) A water quality monitoring machine fish for ocean or inland river lake
CN111746728B (en) Novel overwater cleaning robot based on reinforcement learning and control method
CN116255908B (en) Underwater robot-oriented marine organism positioning measurement device and method
CN108674602A (en) Rescue at sea system
CN111275924B (en) Unmanned aerial vehicle-based child drowning prevention monitoring method and system and unmanned aerial vehicle
CN106094829A (en) A kind of autonomous type Stichopus japonicus fishes for robot system and method
CN110626474A (en) Man-machine cooperative intelligent life buoy and use method thereof
CN111252212B (en) Automatic rescue method and system for multiple drowning people by cooperation of navigable lifesaving device and unmanned aerial vehicle
CN114005021A (en) Laser vision fusion based unmanned inspection system and method for aquaculture workshop
CN112345531B (en) Transformer fault detection method based on bionic robot fish
CN102156994A (en) Joint positioning method of single-view unmarked human motion tracking
CN210681094U (en) Man-machine cooperation intelligent life buoy
CN113313757B (en) Cabin passenger safety early warning algorithm based on monocular ranging
CN109625218B (en) Unmanned monitoring underwater aquaculture robot system based on solar charging
KR20230016730A (en) An automatic landing system to guide the drone to land precisely at the landing site
CN114724177B (en) Human body drowning detection method combining Alphapos and YOLOv5s models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant