CN115661766A - Intelligent ship safety monitoring method and system based on deep learning - Google Patents

Info

Publication number
CN115661766A
Authority
CN
China
Prior art keywords
crew
target
model
face
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211350178.XA
Other languages
Chinese (zh)
Inventor
俞子俊
刘晋
Current Assignee
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Application filed by Shanghai Maritime University
Priority to CN202211350178.XA
Publication of CN115661766A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent ship safety monitoring method and system based on deep learning, belonging to the technical field of ship safety monitoring. The method comprises the following steps: acquiring data collected by monitoring equipment on a ship; inputting the collected image data into a target detection model and executing the subsequent procedure according to the detection result; inputting the collected video data into a behavior recognition model to recognize crew behavior; if the recognized event is an emergency, directly generating an alarm; if it is a general event, tracking the crew member involved and acquiring his or her face data with a face detection model; and establishing a face recognition network, comparing the captured face image of the crew member involved with the images in a crew face database to confirm the crew member's identity, storing the identity together with the data collected in the preceding steps as evidence, and issuing an alarm. The invention can monitor various events and complete, by means of detection algorithms, the whole process from event discovery to evidence collection, giving a more comprehensive ship safety monitoring range and higher detection efficiency.

Description

Intelligent ship safety monitoring method and system based on deep learning
Technical Field
The invention relates to the technical field of ship safety monitoring, in particular to an intelligent ship safety monitoring method and system based on deep learning.
Background
Safety has long been the first requirement of ships during operations at sea. Ensuring the safety of the ship is the responsibility and obligation of every crew member, and ensuring the safe behavior and personal safety of the crew is an important link in ensuring the safety of the ship. Current safety management is mainly carried out through manual supervision: potential safety hazards that have not yet caused accidents are investigated and warned of by browsing the footage of monitoring equipment, and safety accidents that have occurred are investigated and traced. With the development of the times, the shortcomings of this traditional mode have gradually become apparent: it is not only inefficient but also consumes a large amount of manpower and material resources, and can hardly meet the safety management needs of modern intelligent ships. How to guarantee the behavioral safety of crews and unify the safety management of intelligent ships is a problem that is highly valued and urgently needs to be solved in the current shipping field.
At present, with the continuous development and popularization of artificial intelligence technology, more and more shipping companies and scientific research institutions are devoted to integrating artificial intelligence into ship safety monitoring systems. Chinese patent publication No. CN113657201A discloses a crew behavior monitoring and analyzing method, device, equipment and storage medium, which monitors crew behavior in time, identifies it, and raises timely alarms. Chinese patent publication No. CN113486843A discloses multi-scene crew unsafe behavior detection based on an improved YOLOv3, which can detect 6 unsafe crew behaviors in different scenes. Chinese patent publication No. CN114419607A discloses a method and system for detecting non-standard behaviors in a ship cockpit, which helps with real-time early warning and prevention of non-standard behaviors in the cockpit. The systems and methods disclosed in the above patents are of some help to ship safety monitoring, but the following 3 problems remain:
the monitoring range is not comprehensive enough. The technology disclosed by the Chinese patent publication No. CN113486843A only detects 6 unsafe behaviors of the crew; the technology disclosed by Chinese patent publication No. CN114419607A is used for detecting non-standard behaviors in a ship cockpit. The prior technical invention related to ship safety detection is limited to detecting the behavior of a crew, and the actual ship safety monitoring range not only comprises the violation behavior or unsafe behavior of the crew, but also considers the personal safety of the crew, the environment safety on the ship and the like.
The monitoring process is not complete enough. One purpose of designing an intelligent ship safety monitoring system is to automate the monitoring process from event discovery through evidence collection to event disposal; apart from event disposal, which must rely on manpower, the preceding stages can be automated by machines using artificial intelligence. The prior art only automates event discovery and ignores evidence collection, even though the automation of evidence collection can now be realized by combining target tracking and pedestrian retrieval technologies.
The monitoring efficiency needs to be improved. The technology of Chinese patent publication No. CN113657201A identifies crew behavior with an image classification model; that of Chinese patent publication No. CN113486843A improves the YOLOv3 target detection network to detect crew behavior. Video-based behavior recognition is becoming increasingly mature, but the monitoring methods of the prior art still remain at the stage of recognizing human behaviors with image classification and target detection, which limits monitoring efficiency and lowers the automation level of the whole process.
In order to promote the development of ship intelligence and realize the efficient fusion of ship safety management with artificial intelligence technology, constructing a complete, comprehensive and efficient intelligent ship safety monitoring system is a problem that deserves high attention and urgently needs to be solved in the current shipping field.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an intelligent ship safety monitoring method and system based on deep learning, which mainly solves the three problems of existing intelligent ship safety monitoring systems: incomplete monitoring range, incomplete monitoring process and low monitoring efficiency.
In order to achieve the purpose, the invention provides an intelligent ship safety monitoring method based on deep learning, which comprises the following steps:
(1) Constructing a deep learning model and a required training data set, wherein the deep learning model comprises a target detection model, a behavior recognition model and a pedestrian retrieval model, and completing model training by using the training data set;
(2) Acquiring data collected by monitoring equipment on a ship, wherein the data comprises image data and video data;
(3) Inputting the acquired image data into the target detection model and identifying targets in the image data; if a direct state target exists in the detection result, executing step 5; if a behavior state target exists in the detection result, executing step 4;
(4) Inputting the collected video data into the behavior recognition model, and recognizing the behavior of the crew in the video data;
(5) If the event is an emergency event, directly generating an alarm; if the event is a general event, executing step 6;
(6) Tracking the target of the crew member involved in the event, acquiring the crew member's face data with a face detection model during tracking, and stopping tracking after a clear face image of the crew member involved has been acquired;
(7) Establishing a face recognition network, comparing the collected face image of the crew member involved with the images in the crew face database, confirming the crew member's identity, storing the identity together with the data collected in the previous steps as evidence, and issuing an alarm.
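The control flow of the seven steps above can be sketched as a small dispatch routine. This is an illustrative sketch only: the class names, labels, and callback signatures below are hypothetical, not part of the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto

class EventKind(Enum):
    EMERGENCY = auto()   # e.g. fire source, unmanned critical post
    GENERAL = auto()     # e.g. missing helmet, smoking on duty

@dataclass
class Detection:
    label: str
    is_direct_state: bool   # True: a single frame confirms the event (T1)
    kind: EventKind

def monitor_frame(image, video_clip, detect, recognize, track_and_identify, alarm):
    """One pass of the monitoring pipeline (steps 2-7)."""
    for det in detect(image):                    # step 3: object detection
        if not det.is_direct_state:
            det = recognize(video_clip, det)     # step 4: behavior recognition
            if det is None:
                continue                         # no violation confirmed
        if det.kind is EventKind.EMERGENCY:      # step 5: emergency -> alarm now
            alarm(det, identity=None)
        else:                                    # steps 6-7: track, then identify
            identity = track_and_identify(det)
            alarm(det, identity=identity)
```

The callbacks stand in for the YOLOv5, SlowFast, KCF/LFFD/CAL, and FaceNet components described below.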
Further, the target detection model uses the YOLOv5 model; the behavior recognition model uses the SlowFast model; and the pedestrian re-identification model uses the CAL model.
Further, the direct state target comprises at least one of a safety helmet, a fire source and a person fallen to the ground; the presence of such a target at a certain moment is sufficient to judge that an event has occurred.
Further, the behavior state target comprises at least one of a cigarette, a mobile phone and a person in a fighting posture; the presence of such a target at a certain moment is not sufficient, and further detection through the behavior recognition model is needed.
Further, the emergency event comprises at least one of a fire source being discovered and an important post being unmanned.
Further, the general event comprises at least one of failing to wear a safety helmet during engineering work and smoking at an important post.
Further, the tracking of the target of the crew member involved in the event in step 6 specifically comprises:
(6.1) Single-camera tracking: crew target tracking within the same camera scene is performed with a single-target tracking algorithm, and an image sequence of the crew member is collected; if no clear face image has been collected before the crew target leaves the camera's field of view, cross-camera tracking is carried out;
(6.2) Cross-camera tracking: first, the camera points of the nearby area are determined; the scene where the crew member appears is detected through the target detection model and tracked with the single-target tracking algorithm to collect a continuous image sequence of the crew member; the collected image sequence is matched against the sequence from the previous camera through the pedestrian re-identification model, and the sequence with the highest score is determined to be that of the crew member involved;
(6.3) The face detection model is used to acquire crew face data, and tracking stops once a clear face image of the crew member involved has been acquired.
Furthermore, the single-target tracking algorithm uses the KCF algorithm, and the face detection model uses the LFFD model.
Further, the face recognition network in step 7 uses FaceNet.
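Steps (6.1)-(6.3) amount to a loop that follows the target and stops as soon as a clear face is captured. A minimal sketch of that loop, with hypothetical callbacks for the tracker and face detector and an assumed sharpness threshold:

```python
from dataclasses import dataclass

@dataclass
class FaceCrop:
    sharpness: float   # e.g. variance of the Laplacian; higher = clearer

def track_until_face(frames, track_step, detect_face, sharpness_min=100.0):
    """Follow the target frame by frame, attempt face detection each frame,
    and stop once a sufficiently clear face crop is obtained.
    `sharpness_min` is a hypothetical quality threshold, not a value
    stated in the patent."""
    for frame in frames:
        box = track_step(frame)          # KCF-style single-target update
        if box is None:
            return None                  # target left the view: hand over to cross-camera tracking
        face = detect_face(frame, box)   # LFFD-style face detection inside the box
        if face is not None and face.sharpness >= sharpness_min:
            return face                  # clear face captured: stop tracking
    return None
```

Returning `None` corresponds to the hand-over point where step (6.2), cross-camera tracking, takes over.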
The invention also provides an intelligent ship safety monitoring system based on deep learning, which comprises:
a monitoring data sampling module: used for acquiring data collected by the monitoring equipment on a ship, wherein the data comprises image data and video data;
an on-board target detection module: used for carrying the target detection model, inputting the acquired image data into the target detection model, and identifying targets in the image data;
a crew behavior identification module: used for carrying the behavior recognition model, inputting the collected video data into the behavior recognition model, and recognizing the behavior of the crew in the video data;
a crew target tracking module: used for tracking the target of the crew member involved in an event, acquiring the crew member's face data with a face detection model during tracking, and stopping tracking once a clear face image of the crew member involved has been acquired;
a crew identity matching module: used for carrying the face recognition network, comparing the collected face image of the crew member involved with the images in the crew face database, confirming the crew member's identity, storing the identity together with the data collected in the previous steps as evidence, and issuing an alarm.
The invention has the beneficial effects that:
1. Comprehensive monitoring range: the invention can monitor various events happening on a ship; moreover, even if the crew member involved enters the cabin from the deck and leaves the field of view of the original camera, the method can still track the crew member comprehensively through cross-camera tracking.
2. A complete automated process: first, unsafe events are detected through the target detection and behavior recognition algorithms; then the target of the crew member involved is tracked with a target tracking algorithm, cooperating with the target detection and pedestrian re-identification algorithms to achieve cross-camera tracking when necessary; during tracking, a face image of the crew member involved is captured by the face detection algorithm; finally, identity confirmation of the crew member is completed through the face feature extraction network. The whole process from event discovery to evidence collection is realized by computer.
3. Efficient monitoring capability: departing from the prior practice of identifying crew behavior with image classification or target detection, the invention directly identifies crew behavior with a video-based behavior recognition model. The main safety monitoring function is realized with the target detection and behavior recognition algorithms, and target crew retrieval is realized with the target tracking and face matching algorithms. The models carried in the system are representative models with comparatively strong performance in their respective areas.
Based on the above advantages and characteristics, the method can be widely applied in the field of ship safety monitoring. The invention can promote the development of the intelligent ship field, especially ship safety management; it can discover potential safety hazards in time, effectively reduce the total number of ship safety accidents, guarantee the safety of ships and crews, and promote the stable development of safe production in the shipping industry.
Drawings
Fig. 1 is a schematic flow chart of an intelligent ship safety monitoring method based on deep learning according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an intelligent ship safety monitoring system based on deep learning according to an embodiment of the present invention.
Fig. 3 is a schematic flow diagram of a monitoring process of the intelligent ship safety monitoring method based on deep learning according to the embodiment of the invention.
Fig. 4 is a schematic diagram of the YOLOv5 model architecture used in the embodiments of the present invention.
Fig. 5 is a schematic diagram of a SlowFast model architecture used in an embodiment of the present invention.
Fig. 6 is a schematic diagram of an LFFD model architecture used in an embodiment of the invention.
FIG. 7 is a schematic diagram of a CAL model training architecture used in an embodiment of the present invention.
Fig. 8 is a schematic diagram of a FaceNet network framework used in embodiments of the present invention.
Detailed Description
In order to clearly set forth the technical solutions and features of the present invention, it is described in more detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1 and fig. 2, the present embodiment provides a method and a system for monitoring safety of an intelligent ship based on deep learning, where the method includes the following steps:
s101, constructing a deep learning model and a required training data set, wherein the deep learning model comprises a target detection model, a behavior recognition model and a pedestrian retrieval model, and completing model training by using the training data set;
Data set construction: image data and video data from the shipborne monitoring cameras are collected, and data sets corresponding to the different models are constructed respectively.
Model training: using the data sets obtained above, a camera-image-based target detection model M1, a video-based behavior recognition model M2, and a video-based pedestrian re-identification model M3 are trained respectively.
The data sets used by M1 and M2 are constructed from data recorded by the shipborne monitoring cameras; the camera resolution is 1920 × 1080, and the collected video clips average 5 seconds in length, within which the occurrence of an action can be confirmed. M3 is trained on a public pedestrian re-identification data set.
The target detection model uses the YOLOv5 model, as shown in fig. 4. YOLOv5 is one of the representatives of the YOLO (You Only Look Once) series; from YOLOv1 to YOLOv5 the series has improved greatly in many respects, and YOLOv5 introduces several improvement techniques:
First, the Focus structure adopted by the model Backbone performs a slicing operation on the picture, with an effect similar to down-sampling; its purpose is to reduce the parameters and computation of the model while ensuring that no features are lost.
Second, in YOLOv4 the CSP (Cross Stage Partial) structure is applied only to the Backbone; YOLOv5 extends it into two forms applied to the Backbone and the Neck respectively, which greatly reduces the computation and memory cost of the model while preserving its accuracy.
Third, an adaptive picture scaling method is used at the input of the model. The original scaling method introduces a large amount of information redundancy, which slows down model inference; the adaptive method adds a minimal amount of black border to the scaled picture, avoiding the redundancy and improving inference speed.
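The Focus slicing operation can be illustrated in a few lines of NumPy. This is a sketch of the idea only, not the YOLOv5 source code:

```python
import numpy as np

def focus_slice(x: np.ndarray) -> np.ndarray:
    """YOLOv5-style Focus slice: (C, H, W) -> (4C, H/2, W/2).

    Every 2x2 pixel block is split across four channel groups, so the
    spatial resolution halves while no pixel value is discarded."""
    return np.concatenate(
        [x[:, 0::2, 0::2], x[:, 1::2, 0::2],
         x[:, 0::2, 1::2], x[:, 1::2, 1::2]], axis=0)

x = np.arange(3 * 4 * 4, dtype=np.float32).reshape(3, 4, 4)
y = focus_slice(x)   # (3, 4, 4) -> (12, 2, 2): same element count, half the spatial size
```

Because every input pixel survives into some channel, the subsequent convolution sees the full information at half the spatial cost, which is exactly the "down-sampling without feature loss" effect described above.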
The behavior recognition model uses the SlowFast model. As shown in fig. 5, SlowFast is a two-pathway model for video recognition: the Slow pathway is responsible for capturing spatial semantic information and runs at a low frame rate and slow refresh rate, while the Fast pathway is responsible for capturing rapidly changing motion and runs at a fast refresh rate and high temporal resolution. The two pathways are fused by lateral connections.
The Slow pathway can be any convolutional model; its key concept is a large temporal stride s on the input frames, meaning that it processes only one out of every s frames, with s typically 16. If t frames remain after Slow-pathway sampling, the original clip length is s × t frames.
The Fast pathway runs in parallel with the Slow pathway and has three properties. The first is a high frame rate: the Fast pathway samples more densely, with a stride of s/n, where n is the frame-rate ratio between the two pathways; the Fast pathway's sampling density is n times that of the Slow pathway, and n is typically 8. The second is high temporal resolution: the Fast pathway uses no temporal down-sampling layers. The third is low channel capacity: the Fast pathway is made very light by reducing its channel count to m times that of the Slow pathway, with m typically 1/8, which also makes it computationally efficient.
The Slow and Fast pathways are fused through lateral connections, a common technique for fusing different spatial resolutions and semantics.
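The two-pathway temporal sampling described above reduces to simple strided indexing. A minimal sketch using the typical values s = 16 and n = 8 quoted above:

```python
def sample_two_pathways(clip_frames, s=16, n=8):
    """Temporal sampling of the two SlowFast pathways: the Slow pathway
    keeps one frame out of every s, while the Fast pathway samples with
    stride s // n, i.e. n times denser."""
    slow = clip_frames[::s]
    fast = clip_frames[::s // n]
    return slow, fast

frames = list(range(64))                  # a 64-frame clip
slow, fast = sample_two_pathways(frames)  # 4 Slow frames, 32 Fast frames
```

With s = 16 and n = 8 the Fast pathway sees 8 times as many frames, which is what lets it capture fast motion while the Slow pathway, on far fewer frames, can afford the heavier channel capacity for spatial semantics.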
The pedestrian re-identification model uses the CAL model, a model for clothes-changing pedestrian re-identification that retains good accuracy when different pedestrians dress alike. Because crew clothing is essentially uniform, traditional pedestrian re-identification models, which are strongly influenced by clothing appearance, are unsuitable for crew re-identification. The main idea of CAL is to mine clothes-irrelevant features from the original RGB image by penalizing the model's ability to predict clothing, through a Clothes-based Adversarial Loss (CAL). As shown in fig. 7, the CAL model contains a clothes classifier in addition to the identity classifier; during learning, the clothes-based adversarial loss forces the backbone network to mine clothes-irrelevant features.
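The adversarial intuition can be made concrete with a toy penalty: if a clothes classifier applied to the backbone features predicts confidently, the features still encode clothing and should be penalized. This is a conceptual sketch under that interpretation only, not the published CAL loss:

```python
import numpy as np

def clothes_adversarial_penalty(clothes_logits: np.ndarray) -> float:
    """Toy illustration of the CAL idea: penalise features from which a
    clothes classifier can predict confidently.

    If the classifier's softmax over K clothes classes is near-uniform,
    the backbone features carry little clothing information and the
    penalty is small; a confident prediction yields a large penalty."""
    p = np.exp(clothes_logits - clothes_logits.max())   # stable softmax
    p /= p.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return float(np.log(p.size) - entropy)   # 0 when uniform, larger when confident
```

Minimizing such a penalty pushes the backbone toward features the clothes classifier cannot exploit, which is the effect fig. 7 attributes to the adversarial training.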
S102, acquiring data collected by monitoring equipment on a ship, wherein the data comprises image data and video data;
In a specific implementation, the acquired data is video data; picture data is obtained by extracting frames from the video, and data of different types are input into the corresponding models for detection. The sampled image data has a resolution of 1920 × 1080, and the video segments average 20 seconds in length.
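The bookkeeping for splitting one monitoring stream into detector stills and fixed-length recognizer clips can be sketched with frame indices alone. The frame rate and image-sampling interval below are assumed values, not stated in the patent:

```python
def split_stream(n_frames: int, fps: int = 25, image_every_s: int = 1,
                 clip_len_s: int = 20):
    """Index bookkeeping for step S102: which frame indices become still
    images for the target detection model, and how the stream is cut into
    fixed-length clips for the behavior recognition model."""
    image_idx = list(range(0, n_frames, fps * image_every_s))
    clip_span = fps * clip_len_s
    clips = [(start, min(start + clip_span, n_frames))
             for start in range(0, n_frames, clip_span)]
    return image_idx, clips

image_idx, clips = split_stream(1000)   # 40 s of video at 25 fps
```

With the 20-second average clip length given above, a 40-second stream yields two clips and one still per second for the detector.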
S103, inputting the acquired image data into the target detection model, identifying a target in the image data, and executing a step S105 if a direct state target exists in a detection result; if the detection result has a behavior state target, executing step S104;
In a specific implementation, the target detection model is an image-based model: it identifies the position and the type of targets in the image data, and different processes are executed according to the detected targets.
The targets identified by the target detection model are divided into two types, namely:
Direct state targets T1: the presence of such a target at a certain moment is sufficient to judge that an event has occurred; the direct state targets at least include safety helmets, fire sources and persons fallen to the ground.
Behavior state targets T2: the presence of such a target at a certain moment is not sufficient to confirm an event, and further detection through the behavior recognition model is needed; the behavior state targets at least include cigarettes, mobile phones and persons in fighting postures.
The two target types are distinguished by whether the presence of the target at a single moment is enough to judge that an event has occurred. Most crew violations monitored by the system must be judged by behavior analysis on video. For example, to judge that a crew member is sleeping at his post, if only the image at a single moment were identified by the target detection model, the model could mistake the closed eyes of a blinking crew member for sleeping; the behavior recognition model is therefore needed for further analysis of the video. By contrast, when the target detection model identifies a fire source in the image, a dangerous event can be judged directly.
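The T1/T2 split amounts to routing detected labels to one of two handlers. A minimal sketch with illustrative label names (the names are hypothetical, not a complete list from the patent):

```python
# Hypothetical label sets mirroring T1/T2 above
DIRECT_STATE = {"no_helmet", "fire_source", "person_on_ground"}   # T1
BEHAVIOR_STATE = {"cigarette", "mobile_phone", "fighting_pose"}   # T2

def route(labels):
    """Decide which detected labels confirm an event immediately (T1) and
    which must be passed on to the behavior recognition model (T2)."""
    confirmed = [l for l in labels if l in DIRECT_STATE]
    needs_video = [l for l in labels if l in BEHAVIOR_STATE]
    return confirmed, needs_video
```

Labels in neither set (ordinary objects) are simply ignored, matching the example above: a fire source is confirmed at once, while a cigarette only triggers video-based analysis.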
S104, inputting the collected video data into the behavior recognition model, and recognizing the behavior of the crew in the video data;
In a specific implementation, the behavior recognition model is based on video analysis and is mainly used to recognize irregular crew behaviors, which at least include smoking, playing with a mobile phone and fighting. The behavior recognition model receives the sampled video data and, combined with the target information generated by the target detection model, further detects and recognizes the behavior events in the video.
S105, if the event is an emergency, directly generating an alarm; if the event is a general event, go to step S106;
in specific implementation, events monitored by the system are divided into two types, namely:
Emergency events E1: such events may cause serious accidents; managers need to be reminded in time so that the event can be resolved, and evidence collection can be carried out after the event is resolved. The emergency events at least include the discovery of a fire source and an important post being unmanned.
General events E2: such events carry certain potential safety hazards; managers need to be reminded in time and the course of the event needs to be recorded. General events are mostly crew violations, at least including failing to wear a safety helmet during engineering work and smoking at an important post.
Different types of events lead to different handling processes. For example, when a fire source is discovered, an alarm must be raised in time and relevant personnel dispatched to handle it immediately; when a safety helmet is not worn during engineering work, evidence can first be collected and the video data retained, while face recognition is used to match the crew member against the identity database and ascertain his or her identity.
S106, tracking a target of the crew involved in the accident, acquiring the face data of the crew by using a face detection model in the tracking process, and stopping tracking after obtaining a clear face image of the crew involved in the accident;
In a specific implementation, tracking the target of the crew member involved includes the following steps:
Single-camera tracking. Crew target tracking within the same camera scene is performed with a single-target tracking algorithm, and an image sequence of the crew member is collected; if no clear face image has been collected before the crew target leaves the camera's field of view, cross-camera tracking is carried out.
Cross-camera tracking. First, the camera points of the nearby area are determined; the scene where the crew member appears is detected through the target detection model and tracked with the single-target tracking algorithm to collect a continuous image sequence; the collected sequence is matched against the sequence from the previous camera through the pedestrian re-identification model, the highest-scoring sequence is determined to be that of the crew member involved, and tracking continues until a clear face image is collected.
The single-target tracking algorithm is KCF, whose basic idea is to learn a filter template and convolve it with the next frame: the region with the maximum response is the predicted target location. The Kernelized Correlation Filter (KCF) tracking algorithm exploits properties such as the diagonalization of circulant matrices to make target tracking fast and accurate.
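The frequency-domain step at the core of correlation-filter tracking can be shown directly: circular cross-correlation computed via the FFT, whose response peak gives the target shift. This is a bare illustration of that one step, not full KCF with kernels and ridge-regression training:

```python
import numpy as np

def correlation_response(template: np.ndarray, patch: np.ndarray):
    """Circular cross-correlation via the FFT: the argmax of the response
    map is the predicted shift of the target between frames."""
    spectrum = np.fft.fft2(patch) * np.conj(np.fft.fft2(template))
    response = np.real(np.fft.ifft2(spectrum))
    return np.unravel_index(np.argmax(response), response.shape)

# shift a template by (2, 3) and recover the shift from the response peak
t = np.zeros((8, 8)); t[1, 1] = 1.0
p = np.roll(np.roll(t, 2, axis=0), 3, axis=1)
dy, dx = correlation_response(t, p)
```

The FFT turns correlation against every circular shift into an elementwise product, which is where the speed of circulant-matrix diagonalization mentioned above comes from.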
The face detection model uses the LFFD model, a single-class detection model suitable for targets such as faces, pedestrians and vehicles; it uses receptive fields in place of anchors and is thus an anchor-free method. Its main advantages are: first, by adding more convolutional layers it can cover larger-scale targets with limited added latency; second, its ability to detect small targets is outstanding; third, its network structure is plain and can be deployed on mainstream edge devices with good adaptability. The network structure of the LFFD model is shown in fig. 6: the model consists mainly of tiny, small, medium and large parts; 8 feature maps are extracted from the basic model structure to detect faces from small to large, and the detection module is divided into binary classification and bounding-box regression.
And S107, establishing a face recognition network, comparing the collected face image of the crew involved in the event with the image in the crew face database, confirming the identity of the crew, storing the identity of the crew and the data collected in the previous step as evidence, and giving an alarm.
In a specific implementation, a face recognition network is established; the identity of the crew member is confirmed by matching the face image of the crew member involved against the face image features in the crew face database, the data obtained so far are stored as evidence, and an alarm is issued. The face recognition network uses FaceNet, a general recognition network that can be used for face verification, face recognition and face clustering. Through deep convolutional neural network learning it maps images into a Euclidean space in which distance is directly related to face similarity: different images of the same person lie at a small distance, and images of different persons lie at a large distance. Once the feature mapping of images is determined, face recognition becomes a k-NN classification problem. The overall framework of FaceNet is shown in fig. 8; the backbone model employs the deep network Inception-ResNet-v2.
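Once images are mapped to embeddings, both verification and identification reduce to Euclidean distances, as described above. A sketch operating on precomputed embeddings (the distance threshold is an assumed illustrative value, not one stated in the patent):

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    """FaceNet-style verification: the same identity yields a small
    Euclidean distance between embeddings."""
    return float(np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b))) < threshold

def identify(query_emb, database):
    """1-NN identification against a crew face database {name: embedding},
    i.e. the k-NN classification mentioned above with k = 1."""
    return min(database, key=lambda name: np.linalg.norm(
        np.asarray(database[name]) - np.asarray(query_emb)))
```

In the system, `database` would hold one embedding per registered crew member, and the nearest neighbour of the captured face's embedding gives the identity stored as evidence.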
According to an embodiment of another aspect, there is also provided a smart ship safety monitoring system based on deep learning, including:
a monitoring data sampling module: used for acquiring the data collected by the monitoring equipment on the ship, the data comprising image data and video data;
an on-board target detection module: used for carrying the target detection model, inputting the acquired image data into the target detection model, and identifying targets in the image data;
a crew behavior recognition module: used for carrying the behavior recognition model, inputting the collected video data into the behavior recognition model, and recognizing the behavior of crew members in the video data;
a crew target tracking module: used for tracking the target of a crew member involved in an event, acquiring the crew member's face data with the face detection model during tracking, and stopping tracking once a clear face image of the involved crew member has been acquired;
a crew identity matching module: used for carrying the face recognition network, comparing the collected face image of the crew member involved in the event with the images in the crew face database, confirming the identity of the crew member, storing the identity together with the data collected by the preceding modules as evidence, and raising an alarm.
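The decision flow that these modules implement, where emergency events alarm immediately while general events first pass through tracking and identity matching, can be sketched as follows. The event labels and callback names are illustrative assumptions; the patent defines the two event categories but not these identifiers.

```python
from dataclasses import dataclass

# Illustrative event labels: the patent names the categories
# (emergency vs. general events) but not these string identifiers.
EMERGENCY_EVENTS = {"fire_source", "duty_absence"}
GENERAL_EVENTS = {"no_helmet", "smoking"}

@dataclass
class Detection:
    event: str

def route_event(detection, alarm, track_and_identify):
    """Route one detected event through the monitoring workflow:
    emergency events raise an alarm immediately, while general events
    first trigger crew tracking and identity matching so the alarm can
    be stored together with the collected evidence."""
    if detection.event in EMERGENCY_EVENTS:
        alarm(detection.event)
        return "alarm"
    if detection.event in GENERAL_EVENTS:
        identity = track_and_identify(detection)
        alarm(detection.event, identity)
        return "evidence"
    return "ignore"
```

Keeping the routing logic separate from the detection, tracking, and matching modules mirrors the modular structure of the system claim: each module can be replaced (for example, swapping the tracking algorithm) without touching the event-routing policy.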
Finally, it should be noted that the above description covers only preferred embodiments of the present invention, and the scope of the invention is not limited thereto; any alternatives, modifications, or improvements made according to the technical solutions and the inventive concept of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A smart ship safety monitoring method based on deep learning is characterized by comprising the following steps:
(1) Constructing a deep learning model and the required training data sets, the deep learning model comprising a target detection model, a behavior recognition model, and a pedestrian re-identification model, and completing model training with the training data sets;
(2) Acquiring data collected by monitoring equipment on a ship, wherein the data comprises image data and video data;
(3) Inputting the acquired image data into the target detection model and identifying targets in the image data; if a direct state target exists in the detection result, executing step 5;
if a behavior state target exists in the detection result, executing step 4;
(4) Inputting the collected video data into the behavior recognition model, and recognizing the behavior of the crew in the video data;
(5) If the event is an emergency event, directly generating an alarm; if the event is a general event, executing step 6;
(6) Tracking the target of the crew member involved in the event, acquiring the crew member's face data with a face detection model during tracking, and stopping tracking once a clear face image of the involved crew member has been acquired;
(7) Establishing a face recognition network, comparing the collected face image of the crew member involved in the event with the images in the crew face database, confirming the identity of the crew member, storing the identity together with the data collected in the preceding steps as evidence, and raising an alarm.
2. The intelligent ship safety monitoring method based on deep learning of claim 1, wherein: the target detection model uses a YOLOv5 model; the behavior recognition model uses a SlowFast model; and the pedestrian re-identification model uses a CAL model.
3. The intelligent ship safety monitoring method based on deep learning of claim 1, wherein: the direct state target comprises at least one of a safety helmet, a fire source, and a person fallen to the ground, and the presence of such a target at a given moment is sufficient to judge that an event has occurred.
4. The intelligent ship safety monitoring method based on deep learning of claim 1, wherein: the behavior state target comprises at least one of a cigarette, a mobile phone, and a person in a fighting posture; when such a target exists at a given moment, further detection by the behavior recognition model is required.
5. The intelligent ship safety monitoring method based on deep learning of claim 1, wherein: the emergency event comprises at least one of the discovery of a fire source and absence from an important duty post.
6. The intelligent ship safety monitoring method based on deep learning of claim 1, wherein: the general event comprises at least one of failure to wear a safety helmet during engineering work and smoking at an important duty post.
7. The intelligent ship safety monitoring method based on deep learning of claim 1, wherein the tracking of the target of the crew member involved in the event in step 6 specifically comprises:
(6.1) single-camera tracking: crew target tracking within the same camera scene is performed with a single-target tracking algorithm, and an image sequence of the crew member is collected; if no clear face image of the crew member has been collected before the target leaves the camera's field of view, cross-camera tracking is performed;
(6.2) cross-camera tracking: first, the camera points in the nearby area are determined; the scene in which the crew member appears is detected by the target detection model, tracking is performed with the single-target tracking algorithm, and a continuous image sequence of the crew member is collected; the collected image sequence is matched against the sequence from the previous camera by the pedestrian re-identification model, and the highest-scoring sequence is determined to be that of the crew member involved in the event;
(6.3) acquiring crew face data with the face detection model, and stopping tracking once a clear face image of the crew member involved in the event has been acquired.
8. The intelligent ship safety monitoring method based on deep learning of claim 7, wherein the single-target tracking algorithm uses the KCF algorithm, and the face detection model uses the LFFD model.
9. The intelligent ship safety monitoring method based on deep learning of claim 1, wherein the face recognition network in the step 7 uses FaceNet.
10. A smart ship safety monitoring system based on deep learning, characterized by comprising:
a monitoring data sampling module: used for acquiring the data collected by the monitoring equipment on the ship, the data comprising image data and video data;
an on-board target detection module: used for carrying the target detection model, inputting the acquired image data into the target detection model, and identifying targets in the image data;
a crew behavior recognition module: used for carrying the behavior recognition model, inputting the collected video data into the behavior recognition model, and recognizing the behavior of crew members in the video data;
a crew target tracking module: used for tracking the target of a crew member involved in an event, acquiring the crew member's face data with the face detection model during tracking, and stopping tracking once a clear face image of the involved crew member has been acquired;
a crew identity matching module: used for carrying the face recognition network, comparing the collected face image of the crew member involved in the event with the images in the crew face database, confirming the identity of the crew member, storing the identity together with the data collected by the preceding modules as evidence, and raising an alarm.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211350178.XA CN115661766A (en) 2022-10-31 2022-10-31 Intelligent ship safety monitoring method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN115661766A true CN115661766A (en) 2023-01-31

Family

ID=84994745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211350178.XA Pending CN115661766A (en) 2022-10-31 2022-10-31 Intelligent ship safety monitoring method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN115661766A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071836A (en) * 2023-03-09 2023-05-05 山东科技大学 Deep learning-based crewman abnormal behavior detection and identity recognition method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination