CN114663964A - Ship remote driving behavior state monitoring and early warning method and system and storage medium - Google Patents

Ship remote driving behavior state monitoring and early warning method and system and storage medium

Info

Publication number
CN114663964A
CN114663964A (application CN202210565930.6A)
Authority
CN
China
Prior art keywords
information
crew
facial feature
driving behavior
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210565930.6A
Other languages
Chinese (zh)
Inventor
刘佳仑
李晨
李诗杰
韩良喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202210565930.6A priority Critical patent/CN114663964A/en
Publication of CN114663964A publication Critical patent/CN114663964A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks

Abstract

The invention discloses a monitoring and early warning method, system and storage medium for the remote driving behavior state of a ship, which can be widely applied in the technical field of ships. According to the invention, video information of the crew in the ship cockpit is collected in real time, which improves the timeliness of the video information. A target detection algorithm is adopted to construct a facial feature prediction model, and the video information collected in real time is input into the facial feature prediction model to obtain the facial feature information, eye feature information and head posture information of the crew. At the same time, the limb position information of the crew is extracted from the video information, and the driving state of the crew is then analyzed according to the limb position information, the facial feature information, the eye feature information or the head posture information. This reduces the dependence on supervision personnel and increases the number of dimensions along which the driving state is analyzed, thereby effectively improving the accuracy of monitoring the ship driving behavior.

Description

Ship remote driving behavior state monitoring and early warning method and system and storage medium
Technical Field
The invention relates to the technical field of ships, in particular to a method and a system for monitoring and early warning a remote driving behavior state of a ship and a storage medium.
Background
In the related art, with the development and application of technologies such as ship communication and navigation, hull structure design and power propulsion systems, marine traffic accidents caused by ship faults have decreased in recent years, and the driving behavior and physiological characteristics of crew members have become the main causes of large-scale marine accidents. Cockpit video monitoring is therefore gradually being applied to the operation management and accident investigation of ships, in order to avoid accidents such as grounding and collision caused by human error and to guarantee navigation safety. However, current monitoring methods still depend on the subjective evaluation of a supervisor, lack a mechanistic description of driving behavior characteristics, and suffer from application bottlenecks such as difficulty in identifying abnormal driving behavior states, small monitoring and early warning coverage, and a single judgment dimension, so the accuracy of monitoring ship driving behavior is low.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides a monitoring and early warning method, a monitoring and early warning system and a storage medium for the remote driving behavior state of a ship, which can effectively improve the monitoring accuracy of the driving behavior of the ship.
On one hand, the embodiment of the invention provides a monitoring and early warning method for a remote driving behavior state of a ship, which comprises the following steps:
acquiring video information of a crew in a ship cockpit in real time;
extracting the limb position information of the crew according to the video information;
adopting a target detection algorithm to construct a facial feature prediction model;
inputting the video information into the facial feature prediction model to obtain facial feature information, eye feature information and head posture information of the crew;
analyzing the driving state of the crew according to the limb position information, the facial feature information, the eye feature information or the head posture information;
and determining that the driving state comprises an abnormal driving behavior state, and generating early warning information.
In some embodiments, the constructing the facial feature prediction model using the object detection algorithm includes:
converting the FC6 layer and the FC7 layer in the VGG network architecture into convolutional layers, deleting all Dropout layers and the FC8 layer in the VGG network architecture, and adding a Conv6 convolutional layer, a Conv7 convolutional layer, a Conv8 convolutional layer and a Conv9 convolutional layer to obtain an SSD neural network model;
acquiring effective feature layers of preset convolutions through a backbone network of the SSD neural network model;
processing each effective feature layer to obtain the adjustment information of all prediction boxes on each grid point and the categories corresponding to all predictions;
and performing score sorting and non-maximum suppression screening on each prediction box according to the center, width and height of the prediction boxes to obtain the facial feature prediction model.
In some embodiments, the facial feature prediction model is adjusted by a model loss function comprising a confidence loss function and a localization loss function;
the confidence loss function is used for representing the classification loss of the facial feature information, the eye feature information and the head posture information;
the localization loss function is used to represent a transformation loss between the target box and the prediction box.
In some embodiments, prior to said inputting said video information into said facial feature prediction model, said method further comprises the steps of:
extracting real-time images within the video;
reducing local shadow and illumination change of the real-time image by a compression method, and normalizing the real-time image;
traversing the normalized real-time image, calculating the pixel values of each pixel point on the normalized real-time image in the horizontal direction and the vertical direction, and calculating the gradient value and the gradient direction of the pixel point according to the pixel values.
In some embodiments, the inputting the video information into the facial feature prediction model to obtain facial feature information of the crew member includes:
inputting the normalized real-time image into the facial feature prediction model;
dividing the input real-time image into a plurality of cells, and dividing 0 degrees to 360 degrees into a plurality of direction intervals;
counting the gradient direction of the pixels in each cell, and performing weighted projection in the direction interval according to the gradient values to obtain a gradient direction histogram in the cell;
forming a unit block by a plurality of adjacent unit grids, and combining the gradient direction histograms of all the unit grids in the unit block to obtain the characteristic vector of the gradient direction histogram of the unit block;
traversing the real-time image by sliding unit blocks according to a preset step length, and combining the characteristic vectors of the gradient direction histograms of all the unit blocks to obtain the characteristic vector of the gradient direction histogram of the real-time image;
extracting the frontal face position coordinates of the real-time image according to the gradient direction histogram feature vector of the real-time image;
and predicting the facial feature information of the crew according to the frontal face position coordinates and a regression tree algorithm training model.
In some embodiments, the analyzing the driving state of the crew according to the eye feature information includes:
determining the left eye coordinate and the right eye coordinate of the crew according to the eye feature information;
calculating the EAR value of the left eye of the crew according to the left eye coordinate, and calculating the EAR value of the right eye of the crew according to the right eye coordinate;
comparing the left eye EAR value and the right eye EAR value to a minimum eye aspect ratio threshold, respectively, or comparing the left eye EAR value and the right eye EAR value to an eye fatigue threshold, respectively;
and determining the fatigue driving state of the crew according to the comparison result.
In some embodiments, said analyzing the driving state of said crew from said head pose information comprises:
calculating a first Euclidean distance from the nose of the crew member to the left and right face boundaries according to the head posture information, calculating a second Euclidean distance from the eyebrow of the crew member to the left and right face boundaries according to the head posture information, and calculating a third Euclidean distance between the left and right face sides of the crew member according to the head posture information;
determining the head action of the crew according to the first Euclidean distance, the second Euclidean distance and the third Euclidean distance;
determining a fatigue driving state of the crew according to the head action.
In some embodiments, said analyzing the driving state of the crew from the limb position information comprises:
calculating the distance and the angle between the upper limbs of the crew and the rudder according to the limb position information;
acquiring object information on the hand of the crew;
and analyzing the safe driving state of the crew member according to the distance and the angle between the upper limbs of the crew member and the rudder and the object information.
On the other hand, the embodiment of the invention provides a monitoring and early warning system for a remote driving behavior state of a ship, which comprises the following components:
at least one memory for storing a program;
and the at least one processor is used for loading the program to execute the monitoring and early warning method for the remote driving behavior state of the ship.
In another aspect, an embodiment of the present invention provides a storage medium, in which a computer-executable program is stored, and the computer-executable program is used for implementing the ship remote driving behavior state monitoring and early warning method when being executed by a processor.
The embodiment of the invention provides a monitoring and early warning method for the remote driving behavior state of a ship, which has the following beneficial effects:
According to the embodiment, video information of the crew in the ship cockpit is collected in real time, which improves the timeliness of the video information. A target detection algorithm is adopted to construct a facial feature prediction model, and the video information collected in real time is input into the facial feature prediction model to obtain the facial feature information, eye feature information and head posture information of the crew. At the same time, the limb position information of the crew is extracted from the video information, and the driving state of the crew is then analyzed according to the limb position information, the facial feature information, the eye feature information or the head posture information. This reduces the dependence on supervision personnel and increases the number of dimensions along which the driving state is analyzed, thereby effectively improving the accuracy of monitoring the ship driving behavior.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The invention is further described with reference to the following figures and examples, in which:
fig. 1 is a flowchart of a monitoring and early warning method for a remote driving behavior state of a ship according to an embodiment of the present invention;
fig. 2 is a schematic diagram of coordinates of a single eye according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood as excluding the stated number, while "above", "below", "within", etc. are understood as including the stated number. If "first" and "second" are described only for the purpose of distinguishing technical features, they are not to be understood as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of the technical features indicated.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
In the description of the present invention, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., means that a particular feature or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Referring to fig. 1, an embodiment of the present invention provides a monitoring and early warning method for a remote driving behavior state of a ship, and the embodiment may be applied to a server, a cloud end, or a background controller of a remote monitoring platform of a ship.
Taking a background controller applied to a remote monitoring platform of a ship as an example, as shown in fig. 1, the method of the embodiment includes, but is not limited to, the following steps:
and 110, acquiring video information of crews in the ship cockpit in real time.
In the embodiment of the application, a shipborne monitoring unit can be arranged at a ship end in advance, and the video information of the driving behavior of a crew in a ship cockpit is collected in real time through the shipborne monitoring unit. After the real-time video information is acquired, the video information can be transmitted to the background controller after being subjected to variable-pressure compression, so that the channel occupation amount of the information is reduced.
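For illustration only, the following Python sketch shows one way the shipborne monitoring unit could capture cockpit frames and reduce their size before transmission. The use of OpenCV, the JPEG codec and the send_to_controller transport function are assumptions of this sketch; the embodiment does not specify them.

```python
import cv2  # OpenCV for camera capture and JPEG compression

def capture_and_stream(camera_index=0, jpeg_quality=60):
    """Capture cockpit video frames and forward size-reduced JPEG frames.

    jpeg_quality trades image fidelity against channel occupation; both the
    codec and the transport below are illustrative assumptions.
    """
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Compress the raw frame to reduce the bandwidth used by the link
            ok, buf = cv2.imencode(".jpg", frame,
                                   [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            if ok:
                send_to_controller(buf.tobytes())  # hypothetical transport function
    finally:
        cap.release()
```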
And 120, extracting the limb position information of the crew according to the video information.
In the embodiment of the application, after the background controller obtains the video information, the real-time image in the video information is extracted, information such as the position of the limbs and the position of the rudder of a shipman is extracted from the image, and meanwhile, object information on the hands of the current driving shipman can be extracted and obtained.
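As a hedged illustration of this step, the sketch below extracts upper-limb keypoints from one frame with MediaPipe Pose. The embodiment does not name a pose-estimation library, so MediaPipe is a stand-in assumption; the rudder position would come from a separate detector or a calibrated image region.

```python
import cv2
import mediapipe as mp  # pose estimation library, used here as an assumption

mp_pose = mp.solutions.pose

def extract_upper_limb_positions(frame_bgr):
    """Return pixel coordinates of wrists and elbows from one cockpit frame.

    The embodiment only states that limb positions are extracted from the
    image; the estimator used is not specified, so MediaPipe Pose is a stand-in.
    """
    with mp_pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.pose_landmarks:
        return None
    h, w = frame_bgr.shape[:2]
    lm = result.pose_landmarks.landmark
    idx = mp_pose.PoseLandmark
    points = {
        "left_wrist": lm[idx.LEFT_WRIST],
        "right_wrist": lm[idx.RIGHT_WRIST],
        "left_elbow": lm[idx.LEFT_ELBOW],
        "right_elbow": lm[idx.RIGHT_ELBOW],
    }
    # Landmarks are normalized to [0, 1]; convert to pixel coordinates
    return {name: (p.x * w, p.y * h) for name, p in points.items()}
```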
And step 130, constructing a facial feature prediction model by adopting a target detection algorithm.
In the embodiment of the present application, the facial feature prediction model may be constructed by, but is not limited to, the following steps:
converting the FC6 layer and the FC7 layer in the VGG (convolutional neural network) architecture into convolutional layers based on a VGG pre-training model, deleting all Dropout layers and the FC8 layer in the VGG network architecture, and adding a Conv6 convolutional layer, a Conv7 convolutional layer, a Conv8 convolutional layer and a Conv9 convolutional layer on this basis to obtain an SSD neural network model. The VGG pre-training model is a common basic model for the SSD (Single Shot MultiBox Detector) target detection algorithm; in the field of target detection, convolutional layers are added on the basis of VGG-16 to obtain more feature maps for detection. FC6 refers to the sixth fully-connected layer, FC7 refers to the seventh fully-connected layer, and so on, so that FCn refers to the nth fully-connected layer. Conv6 refers to the sixth convolutional layer, Conv7 refers to the seventh convolutional layer, and so on, so that Convn refers to the nth convolutional layer. Then, effective feature layers of preset convolutions are obtained through the backbone network of the SSD neural network model; for example, six effective feature layers such as the third convolution of conv4, the fc7 convolution, the second convolution of conv6, the second convolution of conv7, the second convolution of conv8 and the second convolution of conv9 are obtained using the SSD backbone network. Each effective feature layer is processed to obtain the adjustment information of all prediction boxes on each grid point and the categories corresponding to all predictions. Finally, according to the center, width and height of the prediction boxes, score sorting and non-maximum suppression screening are performed on each prediction box to obtain the facial feature prediction model.
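The following PyTorch sketch illustrates the VGG-to-SSD conversion described above: FC6/FC7 become convolutional layers, Dropout and FC8 are discarded, and extra Conv6 to Conv9 blocks are appended. The channel counts, kernel sizes and the use of torchvision follow the common SSD300 convention and are assumptions where the embodiment does not state them.

```python
import torch.nn as nn
from torchvision.models import vgg16

def build_ssd_backbone():
    """Sketch of the VGG-to-SSD conversion (assumed SSD300-style hyper-parameters).

    FC6/FC7 are replaced with conv layers (a dilated 3x3 conv and a 1x1 conv,
    the usual SSD300 choice), Dropout/FC8 are simply not carried over, and
    extra Conv6-Conv9 blocks are appended.
    """
    base = list(vgg16(weights=None).features)  # VGG-16 convolutional part only (torchvision >= 0.13)
    conv_fc6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)  # replaces FC6
    conv_fc7 = nn.Conv2d(1024, 1024, kernel_size=1)                        # replaces FC7
    extras = nn.ModuleList([
        nn.Sequential(nn.Conv2d(1024, 256, 1), nn.ReLU(inplace=True),
                      nn.Conv2d(256, 512, 3, stride=2, padding=1)),   # Conv6
        nn.Sequential(nn.Conv2d(512, 128, 1), nn.ReLU(inplace=True),
                      nn.Conv2d(128, 256, 3, stride=2, padding=1)),   # Conv7
        nn.Sequential(nn.Conv2d(256, 128, 1), nn.ReLU(inplace=True),
                      nn.Conv2d(128, 256, 3)),                        # Conv8
        nn.Sequential(nn.Conv2d(256, 128, 1), nn.ReLU(inplace=True),
                      nn.Conv2d(128, 256, 3)),                        # Conv9
    ])
    head = nn.Sequential(conv_fc6, nn.ReLU(inplace=True),
                         conv_fc7, nn.ReLU(inplace=True))
    return nn.ModuleList(base), head, extras
```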
In the embodiment of the present application, the classification problem of SSD target detection and the regression problem of the bounding boxes are considered together, and the facial feature prediction model is adjusted through a model loss function, where the model loss function includes a confidence loss function and a localization loss function. The confidence loss function is used to represent the classification loss of the facial feature information, the eye feature information and the head posture information; the localization loss function is used to represent the transformation loss between the target box and the prediction box.
Specifically, the model loss function is shown in equation (1):

$$L(x, c, l, g) = \frac{1}{N}\Big(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\Big) \qquad (1)$$

wherein $L(x, c, l, g)$ represents the model loss function; $L_{conf}(x, c)$ represents the confidence loss (the classification part); $L_{loc}(x, l, g)$ represents the localization loss (the regression part); $N$ represents the number of positive samples matched to the prior boxes; $\alpha$ represents the ratio coefficient between the confidence loss and the localization loss; $x$ represents the matching variable; $c$ represents the confidence prediction value; $l$ represents the predicted location of the bounding box corresponding to the prior box; and $g$ represents the location parameter of the ground truth box.
The localization loss function is shown in equations (2) to (6):

$$L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k}\, \mathrm{smooth}_{L1}\big(l_{i}^{m} - \hat{g}_{j}^{m}\big) \qquad (2)$$

$$\hat{g}_{j}^{cx} = \frac{g_{j}^{cx} - d_{i}^{cx}}{d_{i}^{w}} \qquad (3)$$

$$\hat{g}_{j}^{cy} = \frac{g_{j}^{cy} - d_{i}^{cy}}{d_{i}^{h}} \qquad (4)$$

$$\hat{g}_{j}^{w} = \log\frac{g_{j}^{w}}{d_{i}^{w}} \qquad (5)$$

$$\hat{g}_{j}^{h} = \log\frac{g_{j}^{h}}{d_{i}^{h}} \qquad (6)$$

wherein $L_{loc}(x, l, g)$ represents the localization loss function; $Pos$ represents the positive samples; $(cx, cy)$ is the center of the default box $d$ after compensation and $(w, h)$ are the width and height of the default box; $x_{ij}^{k}$ indicates whether the ith prediction box and the jth real box are matched with respect to category $k$; $\mathrm{smooth}_{L1}(\cdot)$ is the penalty applied between the prediction box and the encoded target box; $l_{i}^{m}$ represents the prediction box; and $\hat{g}_{j}^{m}$, given by equations (3) to (6), represents the real box encoded relative to the default box.
The confidence loss function is shown in equation (7):

$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log\big(\hat{c}_{i}^{p}\big) - \sum_{i \in Neg} \log\big(\hat{c}_{i}^{0}\big), \qquad \hat{c}_{i}^{p} = \frac{\exp(c_{i}^{p})}{\sum_{p}\exp(c_{i}^{p})} \qquad (7)$$

wherein $L_{conf}(x, c)$ represents the confidence loss (the classification part); $Pos$ represents the positive samples; $Neg$ represents the negative samples; $p$ represents the category number; $i$ represents the ith prior box; $j$ represents the jth ground truth box; $x_{ij}^{p}$ indicates whether the ith prior box matches the jth ground truth box of category $p$; $\hat{c}_{i}^{p}$ represents the softmax-normalized prediction probability of the ith prior box for category $p$; and $\hat{c}_{i}^{0}$ represents the prediction probability that the category is background.
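A compact sketch of the combined loss of equations (1), (2) and (7) is given below. It assumes the targets have already been matched and encoded per equations (3) to (6), and it omits the hard-negative mining used in full SSD training; these simplifications are assumptions of the sketch, not details taken from the embodiment.

```python
import torch.nn.functional as F

def ssd_multibox_loss(cls_logits, loc_preds, cls_targets, loc_targets, alpha=1.0):
    """Combined loss of equation (1): confidence + alpha * localization.

    cls_targets holds the matched class index per prior box (0 = background);
    loc_targets holds the encoded offsets g-hat of equations (3)-(6).
    """
    pos = cls_targets > 0                        # prior boxes matched to a ground-truth box
    num_pos = pos.sum().clamp(min=1).float()     # N in equation (1)

    # Localization loss (equation (2)): smooth L1 over positive priors only
    loc_loss = F.smooth_l1_loss(loc_preds[pos], loc_targets[pos], reduction="sum")

    # Confidence loss (equation (7)): softmax cross-entropy over all priors
    conf_loss = F.cross_entropy(cls_logits.view(-1, cls_logits.size(-1)),
                                cls_targets.view(-1), reduction="sum")

    return (conf_loss + alpha * loc_loss) / num_pos
```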
And 140, inputting the video information into the facial feature prediction model to obtain the facial feature information, the eye feature information and the head posture information of the crew.
In the embodiment of the application, before the video information is input into the facial feature prediction model, the real-time images in the video are extracted, the local shadow and illumination changes of each real-time image are reduced by a Gamma compression method, and the real-time image is normalized to reduce the influence of color data and illumination factors. The processing of this step is expressed by equation (8):

$$I(x, y) \leftarrow I(x, y)^{\gamma} \qquad (8)$$

wherein equation (8) represents that the true image brightness is obtained through Gamma correction, $I(x, y)$ being the pixel value at point $(x, y)$ and $\gamma$ the compression exponent.

Then, the normalized real-time image is traversed, and the pixel gradients of each pixel point $(x, y)$ in the horizontal direction, $G_{x}(x, y)$, and in the vertical direction, $G_{y}(x, y)$, are calculated through equations (9) and (10), wherein $I(x, y)$ denotes the pixel value at point $(x, y)$:

$$G_{x}(x, y) = I(x + 1, y) - I(x - 1, y) \qquad (9)$$

$$G_{y}(x, y) = I(x, y + 1) - I(x, y - 1) \qquad (10)$$
The gradient value and the gradient direction of each pixel point are then calculated from these pixel gradients through equations (11) and (12):

$$G(x, y) = \sqrt{G_{x}(x, y)^{2} + G_{y}(x, y)^{2}} \qquad (11)$$

$$\theta(x, y) = \arctan\frac{G_{y}(x, y)}{G_{x}(x, y)} \qquad (12)$$

wherein $G(x, y)$ represents the gradient value and $\theta(x, y)$ represents the gradient direction.
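The preprocessing of equations (8) to (12) can be written compactly with NumPy and OpenCV, as in the sketch below; the gamma value of 0.5 is only an example.

```python
import numpy as np
import cv2

def preprocess_and_gradients(image_bgr, gamma=0.5):
    """Gamma-compress the frame (equation (8)) and compute per-pixel gradients.

    The gamma value and the use of OpenCV are illustrative assumptions; the
    gradient formulas follow equations (9) to (12).
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    gray = np.power(gray, gamma)                      # Gamma compression, eq. (8)

    # Central differences in the horizontal and vertical directions, eqs. (9) and (10)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]

    magnitude = np.sqrt(gx ** 2 + gy ** 2)            # gradient value, eq. (11)
    direction = np.degrees(np.arctan2(gy, gx)) % 360  # gradient direction, eq. (12)
    return magnitude, direction
```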
After the gradient value and the gradient direction are obtained, facial feature information of the crew is extracted through a facial feature prediction model. Specifically, the implementation process includes, but is not limited to, the following steps:
dividing the input normalized real-time image into a plurality of cells, where each cell contains N × N pixels, and dividing 0 degrees to 360 degrees into a plurality of direction intervals, for example n direction intervals; counting the gradient directions of the pixels in each cell, and performing weighted projection onto the direction intervals according to the gradient values to obtain the gradient direction histogram of the cell;
forming a cell block by a plurality of adjacent cells, for example, forming adjacent K cells into a cell block, and combining gradient direction Histograms (HOG) of all the cells in the cell block to obtain gradient direction histogram feature vectors of the cell block;
sliding the unit blocks to traverse the real-time image according to a preset step length L, combining the characteristic vectors of the gradient direction histograms of all the unit blocks to obtain the characteristic vector of the gradient direction histogram of the real-time image, and describing the whole image through the characteristic vector of the gradient direction histogram;
and extracting the frontal face position coordinates of the real-time image according to the gradient direction histogram feature vector of the real-time image, predicting facial state feature points according to the frontal face position coordinates and a regression tree algorithm training model, and thereby locating 68 key feature points of the face to obtain the facial feature information of the crew.
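The HOG-based frontal-face localization and regression-tree landmark prediction described above correspond closely to the detector and 68-point shape predictor shipped with the dlib library; the embodiment does not name that library, so the sketch below is an assumed realization, and the model file path is a placeholder.

```python
import dlib

# HOG + linear-SVM frontal face detector and regression-tree landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # placeholder path

def locate_face_landmarks(gray_image):
    """Return the 68 facial key points of the first detected frontal face."""
    faces = detector(gray_image, 1)        # upsample once to catch smaller faces
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```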
Step 150, analyzing the driving state of the crew according to the limb position information, the facial feature information, the eye feature information or the head posture information;
in the embodiment of the present application, the driving state of the crew is analyzed according to the eye feature information, and the driving state can be processed through the following steps:
determining the left eye coordinate and the right eye coordinate of the crew according to the eye feature information;
calculating the left eye EAR value of the crew according to the left eye coordinate, and calculating the right eye EAR value of the crew according to the right eye coordinate;
comparing the left eye EAR value and the right eye EAR value to a minimum eye aspect ratio threshold, respectively, or comparing the left eye EAR value and the right eye EAR value to an eye fatigue threshold, respectively;
and determining the fatigue driving state of the crew according to the comparison result.
Specifically, as shown in fig. 2, each eye is represented by 6 specific coordinate points $p_{1}$ to $p_{6}$, and the EAR value of a single eye can be calculated by equation (13):

$$EAR = \frac{\lVert p_{2} - p_{6} \rVert + \lVert p_{3} - p_{5} \rVert}{2\,\lVert p_{1} - p_{4} \rVert} \qquad (13)$$

wherein $p_{1}$, $p_{2}$, $p_{3}$, $p_{4}$, $p_{5}$ and $p_{6}$ each represent one of the 6 coordinate points of the eye, $p_{1}$ and $p_{4}$ being the horizontal eye corners and $p_{2}$, $p_{3}$, $p_{5}$, $p_{6}$ the upper and lower eyelid points.
After the left eye EAR value and the right eye EAR value of the crew member are obtained, they may be compared with a minimum eye aspect ratio threshold; for example, if both values are less than this threshold, it is determined that the crew member is likely to be dozing off. Alternatively, the left eye EAR value and the right eye EAR value are compared with an eye fatigue threshold; for example, if both values are less than the eye fatigue threshold, the crew member is determined to be driving while fatigued. The threshold values can be set according to the actual situation.
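The EAR computation of equation (13) and the threshold comparison can be sketched as follows; the threshold values 0.2 and 0.25 are example defaults, not values taken from the embodiment.

```python
import numpy as np

def eye_aspect_ratio(eye_points):
    """EAR of equation (13); eye_points are the six landmarks p1..p6 in order."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in eye_points)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def check_eye_state(left_eye, right_eye, min_ear=0.2, fatigue_ear=0.25):
    """Compare both EAR values against the thresholds; thresholds are examples."""
    left, right = eye_aspect_ratio(left_eye), eye_aspect_ratio(right_eye)
    if left < min_ear and right < min_ear:
        return "possible dozing"
    if left < fatigue_ear and right < fatigue_ear:
        return "fatigue suspected"
    return "normal"
```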
In an embodiment of the present application, the analyzing the driving state of the crew according to the head pose information may be performed by:
calculating a first Euclidean distance from the nose of the crew member to the left and right face boundaries according to the head posture information, calculating a second Euclidean distance from the eyebrow of the crew member to the left and right face boundaries according to the head posture information, and calculating a third Euclidean distance between the left and right sides of the face of the crew member according to the head posture information; determining the head action of the crew member according to the first Euclidean distance, the second Euclidean distance and the third Euclidean distance; and determining the fatigue driving state of the crew member according to the head action. For example, when a nodding motion characteristic of drowsiness is detected from the head action, it can be determined that the crew member is in a fatigue driving state.
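As an illustrative sketch of this analysis, the function below computes the three Euclidean distances from a 68-point landmark array and turns them into simple turn/nod ratios. The specific landmark indices (nose tip 30, face boundaries 0 and 16, brow midpoint 27) and the ratio heuristics are assumptions for illustration.

```python
import numpy as np

def head_motion_ratios(landmarks):
    """Distances used for the head-pose analysis above.

    landmarks is assumed to be a 68-point dlib-style array; the index choices
    below (30 nose tip, 0/16 face boundaries, 27 brow midpoint) are assumptions.
    """
    pts = np.asarray(landmarks, dtype=float)
    nose, brow = pts[30], pts[27]
    left_bound, right_bound = pts[0], pts[16]

    d_nose_left = np.linalg.norm(nose - left_bound)        # first Euclidean distance
    d_nose_right = np.linalg.norm(nose - right_bound)
    d_brow_left = np.linalg.norm(brow - left_bound)        # second Euclidean distance
    d_brow_right = np.linalg.norm(brow - right_bound)
    d_face_width = np.linalg.norm(left_bound - right_bound)  # third Euclidean distance

    # A skewed nose-to-boundary ratio suggests turning; a falling brow height
    # relative to face width suggests nodding.
    return {
        "turn_ratio": d_nose_left / max(d_nose_right, 1e-6),
        "nod_ratio": (d_brow_left + d_brow_right) / max(2.0 * d_face_width, 1e-6),
    }
```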
In an embodiment of the present application, the analyzing the driving state of the crew according to the limb position information may be performed by:
calculating the distance and the angle between the upper limbs of the crew member and the rudder according to the limb position information, and acquiring the object information on the hands of the crew member; and then analyzing the safe driving state of the crew member according to the distance and the angle between the upper limbs and the rudder and the object information. For example, when the distance and the angle between the upper limbs of the crew member and the rudder are not within the range of normal driving, it can be determined that the crew member is in an abnormal driving state; for another example, when the crew member is eating, using a mobile phone or making a phone call, it can be determined that the crew member is in an abnormal driving state.
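A minimal sketch of this safe-driving check is given below; the pixel-based geometry, the distance and angle limits, and the distracting-object labels are all illustrative assumptions.

```python
import numpy as np

def hands_on_rudder_state(wrist_xy, rudder_xy, hand_object=None,
                          max_distance=120.0, angle_range=(20.0, 160.0)):
    """Sketch of the safe-driving check described above.

    wrist_xy and rudder_xy are pixel coordinates; the limits and labels are
    illustrative assumptions, and hand_object is whatever label an object
    detector returned for the hand region (e.g. "phone"), if any.
    """
    wrist = np.asarray(wrist_xy, dtype=float)
    rudder = np.asarray(rudder_xy, dtype=float)
    offset = wrist - rudder

    distance = np.linalg.norm(offset)
    angle = np.degrees(np.arctan2(-offset[1], offset[0])) % 360  # image y-axis points down

    if hand_object in {"phone", "food", "drink"}:
        return "abnormal: distracting object in hand"
    if distance > max_distance or not (angle_range[0] <= angle <= angle_range[1]):
        return "abnormal: hand away from rudder"
    return "normal"
```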
And step 160, determining that the driving state comprises an abnormal driving behavior state, and generating early warning information.
In the embodiment of the application, when it is determined that a crew member is driving while fatigued or driving abnormally, the background controller of the ship remote monitoring platform can generate early warning information to prompt a manager to remotely alert the crew or remotely intervene in the process of controlling the ship, thereby improving the operational safety of the ship.
In the embodiment of the application, after the background controller of the ship remote monitoring platform receives the real-time data of the ship, it can also perform digital management, analysis and visual display of the real-time data, so that managers can conveniently retrieve and check the operating data of the ship.
In summary, the embodiment of the application realizes the identification of various driving behaviors such as blinking, eye closing, head shaking, nodding, using a mobile phone and answering or making phone calls, improves the detection efficiency of abnormal driving behaviors such as fatigue, can effectively realize quantitative evaluation of the driver's behavior state, improves the adaptability of the crew, reduces the control error rate, and provides theoretical guidance and technical support for accident investigation, maritime supervision and crew training.
The embodiment of the invention provides a monitoring and early warning system for a remote driving behavior state of a ship, which comprises:
at least one memory for storing a program;
and the at least one processor is used for loading the program to execute the ship remote driving behavior state monitoring and early warning method shown in the figure 1.
The content of the embodiment of the method of the invention is all applicable to the embodiment of the system, the function of the embodiment of the system is the same as the embodiment of the method, and the beneficial effect achieved by the embodiment of the system is the same as the beneficial effect achieved by the method.
An embodiment of the present invention provides a storage medium, in which a computer-executable program is stored, and the computer-executable program is executed by a processor to implement the ship remote driving behavior state monitoring and early warning method shown in fig. 1.
Embodiments of the present invention also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device may read the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the ship remote driving behavior state monitoring and early warning method shown in fig. 1.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention. Furthermore, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.

Claims (10)

1. A monitoring and early warning method for a remote driving behavior state of a ship is characterized by comprising the following steps:
acquiring video information of a crew in a ship cockpit in real time;
extracting the limb position information of the crew according to the video information;
adopting a target detection algorithm to construct a facial feature prediction model;
inputting the video information into the facial feature prediction model to obtain facial feature information, eye feature information and head posture information of the crew;
analyzing the driving state of the crew according to the limb position information, the facial feature information, the eye feature information or the head posture information;
and determining that the driving state comprises an abnormal driving behavior state, and generating early warning information.
2. The method for monitoring and early warning the remote driving behavior state of the ship according to claim 1, wherein the constructing of the facial feature prediction model by adopting the target detection algorithm comprises:
converting the FC6 layer and the FC7 layer in the VGG network architecture into convolutional layers, deleting all Dropout layers and the FC8 layer in the VGG network architecture, and adding a Conv6 convolutional layer, a Conv7 convolutional layer, a Conv8 convolutional layer and a Conv9 convolutional layer to obtain an SSD neural network model;
acquiring effective feature layers of preset convolutions through a backbone network of the SSD neural network model;
processing each effective feature layer to obtain the adjustment information of all prediction boxes on each grid point and the categories corresponding to all predictions;
and performing score sorting and non-maximum suppression screening on each prediction box according to the center, width and height of the prediction boxes to obtain the facial feature prediction model.
3. The vessel remote driving behavior state monitoring and early warning method according to claim 2, wherein the facial feature prediction model is adjusted by a model loss function, wherein the model loss function comprises a confidence loss function and a positioning loss function;
the confidence loss function is used for representing classification losses of the facial feature information, the eye feature information and the head posture information;
the localization loss function is used to represent a transformation loss between the target box and the prediction box.
4. The vessel remote driving behavior state monitoring and early warning method as claimed in claim 1, wherein before the inputting the video information into the facial feature prediction model, the method further comprises the following steps:
extracting real-time images within the video;
reducing local shadow and illumination change of the real-time image by a compression method, and normalizing the real-time image;
traversing the normalized real-time image, calculating the pixel values of each pixel point on the normalized real-time image in the horizontal direction and the vertical direction, and calculating the gradient value and the gradient direction of the pixel point according to the pixel values.
5. The method for monitoring and warning the remote driving behavior state of the ship according to claim 4, wherein the step of inputting the video information into the facial feature prediction model to obtain the facial feature information of the crew comprises:
inputting the normalized real-time image into the facial feature prediction model;
dividing the input real-time image into a plurality of cells, and dividing 0-360 degrees into a plurality of direction intervals;
counting the gradient direction of the pixels in each cell, and performing weighted projection in the direction interval according to the gradient value to obtain a gradient direction histogram in the cell;
forming a unit block by a plurality of adjacent unit cells, and combining the gradient direction histograms of all the unit cells in the unit block to obtain the characteristic vector of the gradient direction histogram of the unit block;
traversing the real-time image according to a preset step length sliding unit block, and combining the gradient direction histogram feature vectors of all the unit blocks to obtain the gradient direction histogram feature vector of the real-time image;
extracting the frontal face position coordinates of the real-time image according to the gradient direction histogram feature vector of the real-time image;
and predicting the facial feature information of the crew according to the frontal face position coordinates and a regression tree algorithm training model.
6. The method for monitoring and warning the remote driving behavior state of the ship according to claim 1, wherein the analyzing the driving state of the crew according to the eye feature information comprises:
determining the left eye coordinate and the right eye coordinate of the crew according to the eye feature information;
calculating the left eye EAR value of the crew according to the left eye coordinate, and calculating the right eye EAR value of the crew according to the right eye coordinate;
comparing the left eye EAR value and the right eye EAR value to a minimum eye aspect ratio threshold, respectively, or comparing the left eye EAR value and the right eye EAR value to an eye fatigue threshold, respectively;
and determining the fatigue driving state of the crew according to the comparison result.
7. The vessel remote driving behavior state monitoring and early warning method as claimed in claim 1, wherein the analyzing the driving state of the crew according to the head attitude information comprises:
calculating a first Euclidean distance from the nose of the crew member to the left and right face boundaries according to the head posture information, calculating a second Euclidean distance from the eyebrow of the crew member to the left and right face boundaries according to the head posture information, and calculating a third Euclidean distance between the left and right face sides of the crew member according to the head posture information;
determining the head action of the crew according to the first Euclidean distance, the second Euclidean distance and the third Euclidean distance;
determining a fatigue driving state of the crew according to the head action.
8. The method for monitoring and warning the state of the remote driving behavior of the ship according to claim 1, wherein the analyzing the driving state of the crew according to the limb position information comprises:
calculating the distance and the angle between the upper limbs of the crew and the rudder according to the limb position information;
acquiring object information on the hand of the crew;
and analyzing the safe driving state of the crew member according to the distance and the angle between the upper limbs of the crew member and the rudder and the object information.
9. A ship remote driving behavior state monitoring and early warning system, characterized by comprising:
at least one memory for storing a program;
at least one processor for loading the program to execute the vessel remote driving behavior state monitoring and early warning method according to any one of claims 1 to 8.
10. A storage medium having stored therein a computer-executable program for implementing a vessel remote driving behavior state monitoring and warning method according to any one of claims 1 to 8 when the computer-executable program is executed by a processor.
CN202210565930.6A 2022-05-24 2022-05-24 Ship remote driving behavior state monitoring and early warning method and system and storage medium Pending CN114663964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210565930.6A CN114663964A (en) 2022-05-24 2022-05-24 Ship remote driving behavior state monitoring and early warning method and system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210565930.6A CN114663964A (en) 2022-05-24 2022-05-24 Ship remote driving behavior state monitoring and early warning method and system and storage medium

Publications (1)

Publication Number Publication Date
CN114663964A true CN114663964A (en) 2022-06-24

Family

ID=82037679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210565930.6A Pending CN114663964A (en) 2022-05-24 2022-05-24 Ship remote driving behavior state monitoring and early warning method and system and storage medium

Country Status (1)

Country Link
CN (1) CN114663964A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115798019A (en) * 2023-01-06 2023-03-14 山东星科智能科技股份有限公司 Intelligent early warning method for practical training driving platform based on computer vision

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306293A (en) * 2011-07-29 2012-01-04 南京多伦科技有限公司 Method for judging driver exam in actual road based on facial image identification technology
CN106295474A (en) * 2015-05-28 2017-01-04 交通运输部水运科学研究院 The fatigue detection method of deck officer, system and server
US10482333B1 (en) * 2017-01-04 2019-11-19 Affectiva, Inc. Mental state analysis using blink rate within vehicles
CN111709274A (en) * 2020-04-27 2020-09-25 上海九御智能科技有限公司 Driver head-lowering behavior detection algorithm applicable to embedded system
CN112241658A (en) * 2019-07-17 2021-01-19 青岛大学 Fatigue driving early warning system and method based on depth camera
CN113743279A (en) * 2021-08-30 2021-12-03 山东大学 Ship pilot state monitoring method, system, storage medium and equipment
CN113989789A (en) * 2021-11-16 2022-01-28 中国人民武装警察部队工程大学 Face fatigue detection method based on multi-feature fusion in teaching scene
CN114612885A (en) * 2022-03-11 2022-06-10 广州翰南工程技术有限公司 Driver fatigue state detection method based on computer vision

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306293A (en) * 2011-07-29 2012-01-04 南京多伦科技有限公司 Method for judging driver exam in actual road based on facial image identification technology
CN106295474A (en) * 2015-05-28 2017-01-04 交通运输部水运科学研究院 The fatigue detection method of deck officer, system and server
US10482333B1 (en) * 2017-01-04 2019-11-19 Affectiva, Inc. Mental state analysis using blink rate within vehicles
CN112241658A (en) * 2019-07-17 2021-01-19 青岛大学 Fatigue driving early warning system and method based on depth camera
CN111709274A (en) * 2020-04-27 2020-09-25 上海九御智能科技有限公司 Driver head-lowering behavior detection algorithm applicable to embedded system
CN113743279A (en) * 2021-08-30 2021-12-03 山东大学 Ship pilot state monitoring method, system, storage medium and equipment
CN113989789A (en) * 2021-11-16 2022-01-28 中国人民武装警察部队工程大学 Face fatigue detection method based on multi-feature fusion in teaching scene
CN114612885A (en) * 2022-03-11 2022-06-10 广州翰南工程技术有限公司 Driver fatigue state detection method based on computer vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
武玉伟 et al.: "Fundamentals and Applications of Deep Learning" (《深度学习基础与应用》), 30 November 2020, Beijing Institute of Technology Press *
王文峰 et al.: "MATLAB Computer Vision and Machine Cognition" (《MATLAB计算机视觉与机器认知》), 31 August 2017, Beihang University Press *
马佳磊: "Research on Driver State Detection Based on Multiple Facial Information", China Master's and Doctoral Dissertations Full-text Database (Doctoral), Engineering Science and Technology I *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115798019A (en) * 2023-01-06 2023-03-14 山东星科智能科技股份有限公司 Intelligent early warning method for practical training driving platform based on computer vision

Similar Documents

Publication Publication Date Title
CN109725310B (en) Ship positioning supervision system based on YOLO algorithm and shore-based radar system
CN108806334A (en) A kind of intelligent ship personal identification method based on image
CN108230291B (en) Object recognition system training method, object recognition method, device and electronic equipment
CN112100917A (en) Intelligent ship collision avoidance simulation test system and method based on expert confrontation system
CN113139594B (en) Self-adaptive detection method for airborne image unmanned aerial vehicle target
CN113744262B (en) Target segmentation detection method based on GAN and YOLO-v5
CN113393707A (en) Ship monitoring method, system, equipment and storage medium based on photoelectric linkage
CN115331172A (en) Workshop dangerous behavior recognition alarm method and system based on monitoring video
CN111861155A (en) Ship collision risk detection method, system, computer device and storage medium
CN111163290A (en) Device and method for detecting and tracking night navigation ship
CN114663964A (en) Ship remote driving behavior state monitoring and early warning method and system and storage medium
CN111027445A (en) Target identification method for marine ship
CN115880562A (en) Lightweight target detection network based on improved YOLOv5
CN113436125B (en) Side-scan sonar simulation image generation method, device and equipment based on style migration
CN116384597B (en) Dynamic prediction method and system for port entering and exiting of fishing port ship based on geographic information system
CN116310601B (en) Ship behavior classification method based on AIS track diagram and camera diagram group
CN117372928A (en) Video target detection method and device and related equipment
CN115331162A (en) Cross-scale infrared pedestrian detection method, system, medium, equipment and terminal
CN113191266B (en) Remote monitoring management method and system for ship power device
CN115861595A (en) Multi-scale domain self-adaptive heterogeneous image matching method based on deep learning
CN115187936A (en) Monitoring system and method for preventing offshore platform from climbing
CN114581769A (en) Method for identifying houses under construction based on unsupervised clustering
CN114821493A (en) Ship information display method and system based on computer vision, AIS and radar
CN115014348A (en) Method and device for measuring ship navigation speed and computer equipment
CN116486324B (en) Subway seat trampling behavior detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220624