CN116363578A - Ship closed cabin personnel monitoring method and system based on vision - Google Patents


Info

Publication number
CN116363578A
Authority
CN
China
Prior art keywords
personnel
face
image
monitoring
features
Prior art date
Legal status
Pending
Application number
CN202310186103.0A
Other languages
Chinese (zh)
Inventor
张跃文
张金秋
王飞
张鹏
邹永久
姜兴家
杜太利
段绪旭
孙培廷
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202310186103.0A
Publication of CN116363578A
Legal status: Pending

Classifications

    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G06Q10/1091 Recording time for administrative or management purposes
    • G06V10/20 Image preprocessing
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification


Abstract

The invention provides a vision-based method and system for monitoring personnel in a closed cabin of a ship. The method comprises the following steps: processing a monitoring image to obtain a face object and extracting features from the face object; inputting the face recognition features into a person identification model, which classifies the features and outputs the result of comparing them with face features pre-stored in a personnel data set; when the identity information shows that the current face object is a worker, immediately starting real-time personnel tracking, which records the entry time and leaving time and calculates the intermediate residence time; and when the residence time exceeds a preset working-duration threshold, issuing personnel-overtime feedback. The invention determines the identity of personnel in the closed cabin from facial key points extracted by visual analysis, enabling targeted monitoring of personnel state and intelligent personnel management.

Description

Ship closed cabin personnel monitoring method and system based on vision
Technical Field
The invention relates to the technical field of intelligent operation and maintenance of ships, and in particular to a vision-based method and system for monitoring personnel in a closed cabin of a ship.
Background
With the development of ship intelligence, the need to monitor personnel in ship cabins keeps growing. Current shipboard monitoring systems can capture and store images of all parts of a ship in real time, but because their degree of intelligence is low, the video images must be screened and monitored manually. This not only consumes manpower but also prevents automatic monitoring and identification of personnel, so the management efficiency for the whole crew is low.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention provides a vision-based method and system for monitoring personnel in a closed cabin of a ship. The invention determines the identity of cabin personnel from facial key points extracted by visual analysis, enabling targeted monitoring of personnel state and intelligent personnel management.
The invention adopts the following technical means:
a vision-based method for monitoring personnel in a closed cabin of a ship, comprising:
acquiring a monitoring image of personnel in the closed cabin, processing the monitoring image to obtain a face object, and extracting features from the face object to obtain face recognition features;
inputting the face recognition features into a person identification model, which classifies the features, outputs the result of comparing them with face features pre-stored in a personnel data set, and outputs personnel identity information according to the comparison result;
when the identity information shows that the current face object is a worker, immediately starting real-time personnel tracking, which includes recording the entry time and leaving time and calculating the intermediate residence time;
and when the residence time exceeds a preset working-duration threshold, issuing personnel-overtime feedback.
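A minimal sketch of the four steps above as one monitoring cycle. The `detector`, `identifier` and `tracker` objects and the alert labels are illustrative interfaces assumed here for clarity, not APIs defined by the patent:

```python
def monitor_frame(frame, detector, identifier, tracker, clock):
    """One monitoring cycle: detect a face, identify the person, then either
    raise a stranger alert or update the worker's stay record.

    detector/identifier/tracker are assumed interfaces, not patent APIs.
    """
    face = detector.detect(frame)
    if face is None:                      # no face in this frame
        return None
    identity = identifier.classify(face)  # compare against personnel data set
    if identity == "stranger":            # non-worker: risk prompt
        return ("risk_alert", identity)
    overtime = tracker.update(identity, clock())  # record time, check stay
    return ("overtime_alert", identity) if overtime else ("ok", identity)
```

The three collaborators can be backed by the MTCNN detector, the identification model and the tracking module described in the detailed description below.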
Further, when the identity information shows that the current face object is not a worker, a risk prompt for the stranger is issued.
Further, extracting features from the face object to obtain face recognition features comprises:
enhancing the monitoring image with a multi-scale Retinex face image enhancement algorithm;
and extracting, from the enhanced face image, features that represent the relevant characteristics of the face, then applying dimensionality reduction to the extracted features to generate the face recognition features.
Further, enhancing the monitoring image with the multi-scale Retinex face image enhancement algorithm comprises:
acquiring the monitoring image and the current light-source image, and deriving the reflection image from them, the monitoring image being decomposed into the product of the light-source image and the reflection image; the light-source image is obtained by low-pass filtering the monitoring image.
Further, the person identification model is a multi-task cascaded convolutional neural network (MTCNN) comprising an input layer and the P-Net, R-Net and O-Net sub-networks; the working process of the model comprises:
scaling the initial image through an image pyramid at the input layer;
generating a large number of candidate target region boxes with the P-Net sub-network; performing a first classification and bounding-box regression on all region boxes with the R-Net sub-network to discard most boxes that contain no target;
and further judging and regressing the remaining region boxes with the O-Net sub-network to output the final face detection box.
The invention also discloses a vision-based system for monitoring personnel in a closed cabin of a ship, comprising:
a face monitoring module for acquiring a monitoring image of personnel in the closed cabin, processing it to obtain a face object, extracting face recognition features, and inputting them into the person identification model, which classifies the features, compares them with face features pre-stored in a personnel data set, and outputs personnel identity information according to the comparison result;
a real-time tracking module for immediately starting real-time personnel tracking when the identity information shows that the current face object is a worker, the tracking including recording the entry time and leaving time and calculating the intermediate residence time;
and an overtime feedback module for issuing personnel-overtime feedback when the residence time exceeds a preset working-duration threshold.
Further, the system also comprises:
a risk early-warning module for issuing a risk prompt for the stranger when the identity information shows that the current face object is not a worker.
Compared with the prior art, the invention has the following advantages:
1. The invention proposes an improved MTCNN algorithm that preprocesses face images of people entering and leaving the closed cabin with a multi-scale Retinex face image enhancement algorithm and, combined with a feature pyramid structure, enables rapid identity monitoring of cabin workers and improves localization accuracy.
2. The invention establishes a real-time tracking module that records workers' entry and leaving times and their residence time. To better safeguard the closed cabin, a timely early-warning mechanism is established that feeds back when a worker's stay exceeds the allowed duration and immediately raises a risk alarm when a stranger's identity is detected.
3. The invention applies computer vision to the monitoring of personnel in a closed cabin of a ship, making it convenient to track personnel information in real time and to warn of dangerous states promptly, thereby improving both the monitoring efficiency and the working efficiency of cabin personnel.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort to a person skilled in the art.
Fig. 1 is a flow chart of a method for monitoring personnel in a ship closed cabin based on vision.
Fig. 2 is a schematic diagram of a model of a multi-task cascade convolutional neural network for face detection in accordance with the present invention.
FIG. 2 (a) is a schematic diagram of the Proposal Network (P-Net) structure in the present invention.
FIG. 2 (b) is a schematic diagram of the Refine Network (R-Net) structure in the present invention.
Fig. 2 (c) is a schematic diagram of the Output Network (O-Net) structure in the present invention.
Fig. 3 is a functional block diagram of a vision-based ship closed cabin personnel monitoring system.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
As shown in fig. 1, the invention provides a ship closed cabin personnel monitoring method based on vision, which comprises the following steps:
s1, acquiring a monitoring image of a ship closed cabin personnel, processing the monitoring image to acquire a face object, and extracting features of the face object to acquire face recognition features.
In this application, face information is extracted from cameras distributed around the closed cabin of the ship.
S2, inputting the face recognition features into the person identification model, which classifies the features, outputs the result of comparing them with face features pre-stored in the personnel data set, and outputs personnel identity information according to the comparison result.
The MTCNN algorithm detects the face and its five facial feature points on real-time video frames captured by the camera. Detection yields the coordinates of the face candidate box and of the feature points (including the eye positions), which enable the subsequent face localization. After detection, the positions of the five facial feature points are output by the last MTCNN sub-network, O-Net, and stored as an ordered set of coordinate values. Face recognition is accomplished by comparing the objects selected from the input video with the data set created for face recognition; after comparison, the faces are classified into the corresponding categories.
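The comparison against the pre-stored personnel data set can be sketched as a best-match search over similarity scores. The cosine metric, the 0.8 threshold and the `gallery` dictionary layout are illustrative assumptions; the patent does not specify the classifier internals:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching person id, or 'stranger' if no pre-stored
    feature vector is similar enough to the probe features."""
    best_id, best_sim = "stranger", threshold
    for person_id, ref in gallery.items():
        sim = cosine_similarity(probe, ref)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id
```

A probe close to a stored worker vector returns that worker's id; anything below the threshold is classified as a stranger, triggering the risk prompt described later.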
The method specifically comprises the following steps:
(1) By analyzing and comparing image or video perception information, the Multi-Scale Retinex (MSR) image enhancement method is applied within the MTCNN face detection algorithm. MSR theory is introduced in the enhancement stage to remove the influence of lighting and restore the real appearance of the face. Multiple faces contained in the same frame can be identified simultaneously. The face recognition and feature extraction procedure is as follows:
and (3) equipment identification: the basic theory of the retinex image enhancement algorithm is that it requires dividing an original image into a light source image and a reflection image, and enhancement is achieved by reducing the influence of the light source image on the reflection image.
S(x, y) = R(x, y) · L(x, y)   (3-1)
wherein L(x, y) is the incident light, i.e. the light-source image; R(x, y) is the reflectance that determines the intrinsic characteristics of the image; and S(x, y) is the original image. This decomposition removes the effect of illumination from a given image while preserving reflectance.
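The decomposition of formula (3-1) is usually applied in the log domain, with the light-source image L estimated by low-pass (Gaussian) filtering, as stated above. A minimal NumPy sketch; the kernel size and sigma are arbitrary illustrative choices:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """1-D normalized Gaussian kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def low_pass(img, size=5, sigma=1.5):
    """Separable Gaussian blur: estimate of the light-source image L(x, y)."""
    k = gaussian_kernel(size, sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

def single_scale_retinex(img, size=5, sigma=1.5):
    """r = log S - log L: the reflection image in the log domain."""
    eps = 1e-6  # avoid log(0)
    return np.log(img + eps) - np.log(low_pass(img, size, sigma) + eps)
```

On a region of uniform illumination the log-domain result is near zero, i.e. the illumination component has been removed, which is the effect the decomposition is after.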
The algorithm splits the image into an illumination layer and a reflection layer according to Retinex theory and processes the illumination layer, where detail is easily lost, using a brightness-adaptation principle. It first obtains the illumination layer with adaptive Gaussian filtering to suppress halo artifacts, then adaptively removes the illumination in a brightness-adaptive MSR step to preserve detail, and finally enhances the contrast of the MSR result. The improved MSR algorithm first converts the RGB image to HSV, operates on the V band of the HSV model, convolves the V band with an adaptive Gaussian filter, then computes a visibility threshold and a weakening factor, and applies adaptive gamma correction to the output to obtain the enhanced result.
The R values obtained using MSR are as follows:
R(x, y) = log V(x, y) - β · log[G(x, y) * V(x, y)]   (3-2)
where * denotes convolution with the adaptive Gaussian filter G(x, y).
wherein R is the illumination-weakened intensity, V is the V channel of the HSV color space, and β is a control factor: the smaller β is, the closer the result is to the original image; the larger β is, the more pronounced the detail. β is therefore set adaptively from the local image content: in dark regions β is small, so the MSR result stays close to the original image, while in bright regions β is large, so more of the illumination is removed.
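A sketch of the brightness-adaptive β just described. The linear mapping from the local V value to β, and the β range, are assumptions for illustration; the patent does not give the exact adaptation rule:

```python
import numpy as np

def adaptive_beta(v, beta_min=0.2, beta_max=1.0):
    """Set beta from local brightness: small beta in dark regions (result
    stays close to the original), large beta in bright regions (stronger
    illumination removal). The linear mapping is an assumption."""
    v_norm = np.asarray(v, dtype=float) / 255.0
    return beta_min + (beta_max - beta_min) * v_norm

def weaken_illumination(v, illumination):
    """R = log V - beta * log(G * V), with beta set per pixel; the
    'illumination' argument stands for the Gaussian-filtered V channel."""
    eps = 1e-6
    beta = adaptive_beta(v)
    return np.log(np.asarray(v, dtype=float) + eps) - beta * np.log(
        np.asarray(illumination, dtype=float) + eps)
```

With this rule, a dark pixel keeps most of its original value while a bright pixel has its illumination component suppressed more strongly, matching the behaviour described for β.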
(2) Basic structure of MTCNN network
MTCNN is a multi-task convolutional neural network detection method that combines face alignment and facial feature point detection. The overall network adopts a cascade architecture, achieving coarse-to-fine detection. The MTCNN model comprises three sub-networks, P-Net, R-Net and O-Net, which are described below; the overall flow is shown in fig. 2.
P-Net: all feature extraction operations in P-Net are convolutions, as shown in FIG. 2 (a); the input image is resized to 12×12 for training. First, the 12×12 image is convolved with 3×3 kernels (stride 1, padding 0) and then max-pooled over 2×2 regions with stride 2, yielding a 5×5 feature map with 10 channels; this map is convolved with 3×3 kernels (stride 1) to obtain a 3×3×16 feature map; the same operation then yields a 1×1×32 feature map. Finally, three separate 1×1 convolutions on the 1×1×32 map produce the face classification prediction (2 channels), the bounding-box regression prediction (4 channels) and the facial landmark prediction (10 channels). As the first network in the cascade, P-Net coarsely proposes candidate face windows and their bounding-box regression vectors, calibrates the candidates with the regression, and merges highly overlapping candidates with Non-Maximum Suppression (NMS).
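The image-pyramid scales fed to P-Net at the input layer can be generated as below, so that every face larger than a chosen minimum size is mapped to roughly the 12×12 P-Net input at some scale. The 0.709 scale factor and the 20-pixel minimum face size are common MTCNN defaults, assumed here rather than taken from the patent:

```python
def pyramid_scales(width, height, min_face=20, min_size=12, factor=0.709):
    """Scales at which the image is resized so that any face of at least
    min_face pixels matches the min_size x min_size P-Net input at some level."""
    scales = []
    scale = min_size / min_face          # first scale maps min_face -> 12 px
    min_side = min(width, height) * scale
    while min_side >= min_size:          # stop when the image gets too small
        scales.append(scale)
        scale *= factor                  # shrink geometrically
        min_side *= factor
    return scales
```

Each scale produces one pyramid level on which P-Net slides its 12×12 receptive field, so larger faces are caught at the smaller scales and vice versa.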
R-Net: as shown in fig. 2 (b), the input is a 24×24 image together with the face candidate boxes output by P-Net. The operations on the image in R-Net are similar to those in P-Net, except that the specific parameter values differ and a fully-connected (FC) layer is added, which gives the output feature map a distributed representation of the features. R-Net keeps the small number of correct candidate windows among those output by P-Net and discards most wrong ones, again calibrating candidates by bounding-box regression and merging highly overlapping candidates by Non-Maximum Suppression (NMS).
O-Net: the network structure, called the Output Network, is shown in fig. 2 (c). The image is resized to 48×48 and input together with the face candidate boxes output by R-Net. The operations in O-Net are similar to those in R-Net, except that the parameter values differ and the convolutional network is deeper. O-Net performs the same kind of processing as R-Net, but describes more facial detail and outputs the locations of the 5 facial feature points.
(3) Non-maximum suppression algorithm (NMS)
The NMS algorithm eliminates multiple highly overlapping candidate boxes of the same target to find the best detection location and boundary. It first selects, among the candidate boxes of one target, the box with the highest confidence, then computes the IOU between this box and each remaining candidate and compares it with a user-defined threshold: if the IOU exceeds the threshold, the candidate overlaps the previously selected best box too much and is discarded, avoiding repeated overlapping candidate regions. This step is repeated until all candidates have been compared, finally yielding the best representation of the detected target. For all detection tasks, NMS is an indispensable post-processing step that removes redundancy from the detection results.
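The greedy procedure described above can be sketched as follows; boxes are (x1, y1, x2, y2) tuples and the 0.5 threshold is an illustrative default:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop the remaining boxes
    that overlap it above the threshold, then repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return keep
```

The same routine is applied after P-Net, R-Net and O-Net, each time on that stage's surviving candidate boxes.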
First, we need to know an evaluation formula:
IOU = area(DR ∩ GT) / area(DR ∪ GT)   (3-3)
The IOU value from this formula measures the overlap between the face bounding box finally predicted by the detection algorithm and the manually annotated ground-truth box, and directly reflects the accuracy of face detection. In the formula, DR is the face bounding box predicted by the detector (obtained from the MTCNN network described above) and GT is the ground-truth face bounding box. Ideally the ratio is 1; in practice there is always some error, so the closer the value is to 1, the more accurate the predicted face box.
S3, when the identity information shows that the current face object is a worker, immediately starting real-time personnel tracking, which includes recording the entry time and leaving time and calculating the intermediate residence time.
The face information extracted by the face detection module is analyzed to determine whether the person is a worker or a stranger, and identified workers are then tracked in real time. Once a worker is identified, the equipment records the worker's entry time, leaving time and intermediate residence time in real time for subsequent identification and analysis.
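A minimal sketch of this entry/leave/residence bookkeeping. The class interface and the 8-hour threshold are illustrative assumptions; the patent only requires that the residence time be compared against a preset working-duration threshold:

```python
from datetime import datetime, timedelta

class StayTracker:
    """Record entry/leave times per worker and flag overtime stays.
    The 8-hour default threshold is an illustrative assumption."""

    def __init__(self, max_stay=timedelta(hours=8)):
        self.max_stay = max_stay
        self.entries = {}  # person id -> entry time

    def enter(self, person_id, when):
        """Record the moment the worker enters the cabin."""
        self.entries[person_id] = when

    def leave(self, person_id, when):
        """Return (residence_time, overtime_flag) and clear the record."""
        start = self.entries.pop(person_id)
        stay = when - start
        return stay, stay > self.max_stay
```

When the flag is true, the overtime feedback of step S4 below would be issued.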
Further, when the identity information shows that the current face object is not a worker, a risk prompt for the stranger is issued. When the face detection module determines that the person is not a cabin worker, the equipment immediately issues a risk warning: it signals that a stranger has entered the cabin, which is a safety hazard, and raises an alarm so that the crew can report it.
S4, when the residence time exceeds the preset working-duration threshold, issuing personnel-overtime feedback.
When, from the working times recorded by the real-time tracking module, the equipment detects that a worker's stay exceeds the normal working duration, it immediately reports the overtime and prompts that the working time has been exceeded.
The invention is deployed with cameras or video cameras covering the viewing range of the ship's machinery, which transmit the collected image or video perception information over the network to a shipboard server. A visual analysis unit for the ship's image or video perception information is deployed on the server: it first cleans the perception information and removes abnormal parts, then identifies the identification marks in the perception information by means of the face detection module and extracts the corresponding operation and maintenance information. A ship machinery state analysis unit deployed on the server uses the real-time tracking module to judge the working state of the sensed equipment and to count and summarize the accumulated working time of each state; daily safety maintenance and risk analysis are then formed by the risk early-warning module together with the ship's intelligent operation and maintenance knowledge base, and information is pushed as required.
The specific functions are as follows: the video from the camera installed at the closed-cabin door is analyzed to determine the identity of people coming and going, using sample data generated by a pre-trained model to decide promptly whether each person is a worker or a stranger; the exact entry and exit times and the residence time are tracked; a risk warning is raised promptly for strangers, and workers' stay times are checked for overtime and fed back in time. The workflow is shown in figure 1.
The invention also discloses a vision-based system for monitoring personnel in a closed cabin of a ship, shown in fig. 3, which mainly comprises:
a face monitoring module for acquiring a monitoring image of personnel in the closed cabin, processing it to obtain a face object, extracting face recognition features, and inputting them into the person identification model, which classifies the features, compares them with face features pre-stored in a personnel data set, and outputs personnel identity information according to the comparison result;
a real-time tracking module for immediately starting real-time personnel tracking when the identity information shows that the current face object is a worker, the tracking including recording the entry time and leaving time and calculating the intermediate residence time;
and an overtime feedback module for issuing personnel-overtime feedback when the residence time exceeds a preset working-duration threshold.
Further, the system also comprises:
a risk early-warning module for issuing a risk prompt for the stranger when the identity information shows that the current face object is not a worker.
Since the system embodiments correspond to the method embodiments above, their description is relatively brief; for relevant details, refer to the description of the method embodiments, which will not be repeated here.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (7)

1. A vision-based method for monitoring personnel in a closed cabin of a ship, comprising:
acquiring a monitoring image of personnel in a ship's closed cabin, processing the monitoring image to obtain a face object, and extracting features from the face object to obtain face recognition features;
inputting the face recognition features into a personnel identity recognition model, wherein the personnel identity recognition model classifies the face recognition features, outputs a comparison result between the face recognition features and face features pre-stored in a personnel data set, and outputs personnel identity information according to the comparison result;
when the personnel identity information shows that the current face object is a staff member, immediately performing real-time personnel tracking, wherein the tracking comprises obtaining the personnel entry time and exit time and calculating the intermediate residence time;
and when the intermediate residence time exceeds a preset working-duration threshold, feeding back that the personnel's work has run overtime.
2. The vision-based ship closed cabin personnel monitoring method according to claim 1, wherein a risk prompt is issued for a stranger when the personnel identity information shows that the current face object is not a staff member.
3. The vision-based ship closed cabin personnel monitoring method according to claim 1, wherein extracting features from the face object to obtain face recognition features comprises:
enhancing the monitoring image with a multi-scale Retinex face image enhancement algorithm;
and extracting, from the enhanced face image, features that characterize the face, and performing dimension reduction on the extracted features to generate the face recognition features.
4. The vision-based ship closed cabin personnel monitoring method according to claim 3, wherein enhancing the monitoring image with the multi-scale Retinex face image enhancement algorithm comprises:
acquiring the monitoring image and the current light-source image, and obtaining a reflection image from them, wherein the monitoring image is decomposed into the product of the light-source image and the reflection image, and the light-source image is obtained by low-pass filtering the monitoring image.
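The decomposition in claim 4 — monitoring image = light-source image × reflection image, with the light-source image obtained by low-pass filtering — can be sketched in the log domain, where the product becomes a subtraction. The sketch below uses a simple box filter as the low-pass stage and three illustrative (odd) window sizes; a practical multi-scale Retinex implementation would typically use Gaussian surrounds instead.

```python
import numpy as np


def box_lowpass(img, size):
    # Separable box filter standing in for the low-pass stage (odd sizes).
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(size) / size
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, out)


def multiscale_retinex(img, sizes=(3, 9, 15), eps=1e-6):
    # monitoring image I = L * R  =>  log R = log I - log L,
    # averaged over several low-pass scales (the sizes are illustrative).
    img = img.astype(float) + eps
    log_i = np.log(img)
    reflect = np.zeros_like(img)
    for s in sizes:
        illum = box_lowpass(img, s) + eps      # estimated light-source image
        reflect += (log_i - np.log(illum)) / len(sizes)
    return reflect                             # log-reflectance image
```

On a uniformly lit face image the estimated light-source image equals the input, so the log-reflectance is near zero everywhere; detail only survives where the image departs from its local low-pass estimate, which is what makes the method useful under the uneven lighting of a closed cabin.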
5. The vision-based ship closed cabin personnel monitoring method according to claim 1, wherein the personnel identity recognition model is a multi-task cascaded convolutional neural network model comprising an input layer, a P-Net sub-network, an R-Net sub-network and an O-Net sub-network; the model works as follows:
at the input layer, scaling the initial image through an image pyramid;
generating a large number of candidate target region boxes with the P-Net sub-network; performing a first round of classification and bounding-box regression on all candidate boxes with the R-Net sub-network to discard most boxes that contain no target;
and further classifying and regressing the remaining boxes with the O-Net sub-network to output the final face detection box.
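The input-layer image pyramid of claim 5 amounts to generating a sequence of scale factors at which P-Net scans the image. The sketch below uses the 0.709 scale step and 12-pixel P-Net window commonly associated with MTCNN; those constants are assumptions here, not values stated in the patent.

```python
def pyramid_scales(height, width, min_face=20, factor=0.709, net_size=12):
    """Scale factors for the input-layer image pyramid of an MTCNN.

    The largest scale maps the smallest face of interest (min_face pixels)
    down to the P-Net window; each further level shrinks the image by
    `factor` until its shorter side drops below the window size.
    """
    scales = []
    scale = net_size / min_face            # largest scale first
    min_side = min(height, width) * scale
    while min_side >= net_size:
        scales.append(scale)
        scale *= factor
        min_side *= factor
    return scales
```

For a 480×640 monitoring frame this yields scales 0.6, 0.425, … down to the level at which the shorter side falls below 12 pixels; P-Net is run once per scale, so small faces are found at large scales and large faces at small ones.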
6. A vision-based ship closed cabin personnel monitoring system, comprising:
the face monitoring module, used for acquiring a monitoring image of personnel in a ship's closed cabin, processing the monitoring image to obtain a face object, and extracting features from the face object to obtain face recognition features; and for inputting the face recognition features into a personnel identity recognition model, wherein the personnel identity recognition model classifies the face recognition features, outputs a comparison result between the face recognition features and face features pre-stored in a personnel data set, and outputs personnel identity information according to the comparison result;
the real-time tracking module, used for immediately performing real-time personnel tracking when the personnel identity information shows that the current face object is a staff member, wherein the tracking comprises obtaining the personnel entry time and exit time and calculating the intermediate residence time;
and the timeout feedback module, used for feeding back that the personnel's work has run overtime when the intermediate residence time exceeds a preset working-duration threshold.
7. The vision-based ship closed cabin personnel monitoring system of claim 6, further comprising:
the risk early warning module, used for issuing a risk prompt for a stranger when the personnel identity information shows that the current face object is not a staff member.
CN202310186103.0A 2023-03-01 2023-03-01 Ship closed cabin personnel monitoring method and system based on vision Pending CN116363578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310186103.0A CN116363578A (en) 2023-03-01 2023-03-01 Ship closed cabin personnel monitoring method and system based on vision


Publications (1)

Publication Number Publication Date
CN116363578A true CN116363578A (en) 2023-06-30

Family

ID=86926644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310186103.0A Pending CN116363578A (en) 2023-03-01 2023-03-01 Ship closed cabin personnel monitoring method and system based on vision

Country Status (1)

Country Link
CN (1) CN116363578A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218783A (en) * 2023-09-12 2023-12-12 广东云百科技有限公司 Internet of things safety management system and method


Similar Documents

Publication Publication Date Title
CN112200043B (en) Intelligent danger source identification system and method for outdoor construction site
US20210081698A1 (en) Systems and methods for physical object analysis
CN110419048B (en) System for identifying defined objects
CN109241985B (en) Image identification method and device
US8139817B2 (en) Face image log creation
CN109670441A (en) A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium
CN112381075B (en) Method and system for carrying out face recognition under specific scene of machine room
US9412025B2 (en) Systems and methods to classify moving airplanes in airports
EP1589485A2 (en) Object tracking and eye state identification method
CN112287875B (en) Abnormal license plate recognition method, device, equipment and readable storage medium
CN110555875A (en) Pupil radius detection method and device, computer equipment and storage medium
US20220122360A1 (en) Identification of suspicious individuals during night in public areas using a video brightening network system
CN114898261A (en) Sleep quality assessment method and system based on fusion of video and physiological data
Sosnowski et al. Image processing in thermal cameras
CN116363578A (en) Ship closed cabin personnel monitoring method and system based on vision
CN111259763A (en) Target detection method and device, electronic equipment and readable storage medium
Gal Automatic obstacle detection for USV’s navigation using vision sensors
Tathe et al. Real-time human detection and tracking
CN112101260A (en) Method, device, equipment and storage medium for identifying safety belt of operator
CN110837760B (en) Target detection method, training method and device for target detection
US20230051823A1 (en) Systems, methods, and computer program products for image analysis
CN116452976A (en) Underground coal mine safety detection method
US20220392225A1 (en) Concept for Detecting an Anomaly in Input Data
Chen et al. Real-time instance segmentation of metal screw defects based on deep learning approach
CN114565531A (en) Image restoration method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination