CN114782897A - Dangerous behavior detection method and system based on machine vision and deep learning - Google Patents

Dangerous behavior detection method and system based on machine vision and deep learning

Info

Publication number
CN114782897A
Authority
CN
China
Prior art keywords
information
dangerous
acquiring
area
frame image
Prior art date
Legal status
Withdrawn
Application number
CN202210497047.8A
Other languages
Chinese (zh)
Inventor
任剑岚
姜如霞
刘冰洁
唐雅雯
王金
Current Assignee
Jiangxi Vocational and Technical College of Communication
Original Assignee
Jiangxi Vocational and Technical College of Communication
Priority date
Filing date
Publication date
Application filed by Jiangxi Vocational and Technical College of Communication filed Critical Jiangxi Vocational and Technical College of Communication
Priority to CN202210497047.8A priority Critical patent/CN114782897A/en
Publication of CN114782897A publication Critical patent/CN114782897A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a dangerous behavior detection method and system based on machine vision and deep learning, wherein the method comprises the following steps: acquiring monitoring video stream information of a target area, acquiring frame image information from the monitoring video stream information and preprocessing it; acquiring a region of interest of the preprocessed frame image information, extracting features to identify dangerous goods, and determining the interaction relationship between the dangerous goods and a target object; constructing a dangerous behavior detection model to detect dangerous behaviors in the target area; judging the dangerous behavior grade of the detected object according to the detection result and generating early warning information; and, meanwhile, defining a dangerous area in the target area, acquiring environmental audio information while acquiring the monitoring video stream information of the dangerous area, obtaining sensitive word information from the environmental audio information, and giving early warning according to the sensitive word information. The invention can monitor and identify dangerous behaviors in the target area, generates early warning information while ensuring identification accuracy, and realizes sharing of the danger information in the target area.

Description

Dangerous behavior detection method and system based on machine vision and deep learning
Technical Field
The invention relates to the technical field of behavior detection, in particular to a dangerous behavior detection method and system based on machine vision and deep learning.
Background
Video surveillance is widely used to maintain safety and order in public buildings. Traditional video monitoring relies mainly on manual operation, so emergencies often cannot be handled in time; with the rapid development of artificial intelligence, intelligent analysis has good application prospects in the building security field. Research on dangerous behavior detection of personnel is a hotspot in the field of computer vision, is the key to upgrading video monitoring to intelligent monitoring, and has attracted wide attention in building security research. However, traditional detection methods require manually designed features, suffer from low detection accuracy and long processing time, and have shortcomings such as inaccurate and insufficient behavior feature extraction, which restrict the effective application of behavior detection and recognition; behavior detection methods based on deep learning, by contrast, offer efficient computation and excellent model generalization capability.
In order to rapidly analyze and detect dangerous behaviors of a detected object in a target area, a supporting system needs to be developed. The system acquires monitoring video stream information of the target area, obtains frame image information from the video stream, preprocesses it and extracts features to identify dangerous goods; it constructs a dangerous behavior detection model based on a convolutional neural network model and detects dangerous behaviors in the target area from the extracted features; it judges the dangerous behavior grade of the detected object according to the detection result and generates corresponding early warning information; meanwhile, it delimits the outline of a dangerous area in the target area, acquires environmental audio information while acquiring the monitoring video stream information of the dangerous area, obtains sensitive words from the environmental audio information, and gives early warning according to the sensitive words. How to construct the dangerous behavior detection model and accurately identify behavior characteristics during the implementation of such a system is an urgent problem to be solved.
Disclosure of Invention
In order to solve the technical problems, the invention provides a dangerous behavior detection method and system based on machine vision and deep learning.
The invention provides a dangerous behavior detection method based on machine vision and deep learning, which comprises the following steps:
acquiring monitoring video stream information of a target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information;
acquiring an interested area of the preprocessed frame image information, extracting features to identify dangerous goods, and determining an interactive relation between the dangerous goods and a target object;
constructing a dangerous behavior detection model based on a convolutional neural network model, and detecting dangerous behaviors in a target area according to the dangerous behavior detection model;
judging the dangerous behavior grade of a detected object according to the detection result of the dangerous behavior detection model, and generating early warning information according to the dangerous behavior grade;
meanwhile, a dangerous area in the target area is defined, environmental audio information is obtained when monitoring video stream information of the dangerous area is obtained, sensitive word information is obtained according to the environmental audio information, and early warning is carried out according to the sensitive word information.
In this scheme, the acquiring of the surveillance video stream information of the target area, acquiring frame image information according to the surveillance video stream information, and preprocessing the frame image information specifically include:
dividing monitoring video stream information of a target area, identifying frame image information containing portrait information, and taking the frame image information containing portrait as key frame image information;
converting the key frame image information to grayscale and denoising it, filtering out the background image based on an edge algorithm, and acquiring a region of interest in the key frame image information;
and acquiring an optical flow vector in the key frame image information to generate optical flow characteristics.
In the scheme, the dangerous goods identification is carried out, and the interactive relation between the dangerous goods and the target object is determined, specifically:
identifying dangerous goods based on a Faster R-CNN model, acquiring feature map information of the region of interest in the frame image information through VGG16, convolving the feature map information through an RPN (Region Proposal Network), generating proposed regions of different scales, and performing anchor frame regression;
performing pooling operation on the proposed area, judging whether dangerous goods are contained or not through a full connection layer and a Softmax classifier, acquiring category information of the dangerous goods, and acquiring position offset of an anchor frame through anchor frame regression again to generate an accurate dangerous goods target anchor frame;
when the dangerous goods are detected, acquiring position information of the hand key points of the target object in the frame image information, and calculating distance information between the dangerous goods target anchor frame and the hand key points of the target object according to their position information;
when the distance information is smaller than a preset distance threshold value, judging that dangerous goods holding personnel exist in the target area and generating early warning information;
and when the distance information is greater than a preset distance threshold, marking the target object, and monitoring behavior information of the marked object in a key manner.
In this scheme, constructing a dangerous behavior detection model based on a convolutional neural network model and detecting dangerous behaviors in the target area according to the features extracted by the dangerous behavior detection model specifically includes:
establishing a dangerous behavior detection model, setting initialization parameters, and training the dangerous behavior detection model through a dangerous behavior data set;
segmenting the monitoring video stream information of a target area, acquiring image characteristics and optical flow characteristics of frame image information in each segment of video stream information through sparse sampling, and taking the image characteristics and the optical flow characteristics as local characteristics of two characteristic categories;
aggregating various local features to obtain global features, converting the global features into classified feature vectors through multi-scale feature fusion, and obtaining dangerous behavior recognition results under corresponding feature categories according to the classified feature vectors;
and weighting the dangerous behavior recognition results corresponding to the local features of the two feature categories according to a preset weight to generate a final dangerous behavior prediction result.
In this scheme, the determining a dangerous behavior level of a detection object according to a detection result of the dangerous behavior detection model, and generating early warning information according to the dangerous behavior level specifically includes:
acquiring a detection result of a dangerous behavior detection model, constructing a dangerous behavior evaluation standard system, judging a dangerous coefficient of a dangerous behavior according to the occurrence frequency of the dangerous behavior, the possibility of causing an accident and the severity after the accident, and acquiring an evaluation score according to the dangerous coefficient by the dangerous behavior evaluation standard system;
generating weight information according to the geographical position information and the pedestrian volume information of the target area;
generating a comprehensive evaluation score of dangerous behaviors in a target region according to the weight information and the evaluation score, presetting a comprehensive evaluation score threshold interval of the dangerous behaviors, and determining the grade information of the dangerous behaviors according to the interval to which the comprehensive evaluation score belongs;
and generating corresponding early warning information according to the grade information, sending and displaying the early warning information according to a preset method, and generating voice reminding information in the target area to remind the target object.
In the scheme, when the monitoring video stream information of the dangerous area is acquired, the environmental audio information is acquired, the sensitive word information is acquired according to the environmental audio information, and the early warning is performed according to the sensitive word information, which specifically comprises the following steps:
setting a preset region according to the distribution and area information of the dangerous region in the target region, judging whether a person enters the preset region, and starting the visual monitoring of the dangerous region when the person enters the preset region;
presetting reference image information of a dangerous area in a target area, acquiring monitoring frame image information of the dangerous area in the target area, and preprocessing according to the monitoring frame image information;
acquiring the contour features of the preprocessed monitoring frame image information according to an edge detection operator, and performing similarity calculation according to the contour features of the monitoring frame image information and the contour features in the reference image information;
comparing and judging the similarity with a preset similarity threshold, and if the similarity is smaller than the preset similarity threshold, proving that a person breaks into the dangerous area, and generating early warning information;
synchronously acquiring environment audio information, identifying the environment audio information to acquire an effective waveband, judging whether a voice signal exists or not according to the effective waveband, identifying the voice signal and converting the voice signal into text information;
judging whether the text information contains sensitive words or not, if so, acquiring the occurrence frequency of the sensitive words in the text information, and when the occurrence frequency is greater than a preset occurrence frequency threshold, generating early warning information and displaying the early warning information according to a preset method.
The second aspect of the present invention also provides a dangerous behavior detection system based on machine vision and deep learning, the system comprising: a memory and a processor, wherein the memory stores a dangerous behavior detection method program based on machine vision and deep learning, and when the processor executes the dangerous behavior detection method program based on machine vision and deep learning, the following steps are implemented:
acquiring monitoring video stream information of a target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information;
acquiring an interested area of the preprocessed frame image information, extracting features to identify dangerous goods, and determining an interactive relation between the dangerous goods and a target object;
constructing a dangerous behavior detection model based on a convolutional neural network model, and detecting dangerous behaviors in a target area according to the dangerous behavior detection model;
judging the dangerous behavior grade of a detected object according to the detection result of the dangerous behavior detection model, and generating early warning information according to the dangerous behavior grade;
meanwhile, a dangerous area in the target area is defined, environmental audio information is obtained when monitoring video stream information of the dangerous area is obtained, sensitive word information is obtained according to the environmental audio information, and early warning is carried out according to the sensitive word information.
In this scheme, the acquiring of the surveillance video stream information of the target area, acquiring frame image information according to the surveillance video stream information, and preprocessing the frame image information specifically include:
dividing monitoring video stream information of a target area, identifying frame image information containing portrait information, and taking the frame image information containing portrait as key frame image information;
converting the key frame image information to grayscale and denoising it, filtering out the background image based on an edge algorithm, and acquiring a region of interest in the key frame image information;
and acquiring an optical flow vector in the key frame image information to generate optical flow characteristics.
In the scheme, dangerous goods identification is carried out, and the interactive relation between the dangerous goods and the target object is determined, specifically:
identifying dangerous goods based on a Faster R-CNN model, acquiring feature map information of the region of interest in the frame image information through VGG16, convolving the feature map information through an RPN (Region Proposal Network), generating proposed regions of different scales, and performing anchor frame regression;
performing pooling operation on the proposed area, judging whether dangerous goods are contained or not through a full connection layer and a Softmax classifier, acquiring category information of the dangerous goods, acquiring position offset of an anchor frame through anchor frame regression again, and generating an accurate dangerous goods target anchor frame;
when the dangerous goods are detected, acquiring position information of the hand key points of the target object in the frame image information, and calculating distance information between the dangerous goods target anchor frame and the hand key points of the target object according to their position information;
when the distance information is smaller than a preset distance threshold value, judging that dangerous goods holding personnel exist in the target area and generating early warning information;
and when the distance information is greater than a preset distance threshold, marking the target object, and monitoring behavior information of the marked object in a key manner.
In this scheme, constructing a dangerous behavior detection model based on a convolutional neural network model and detecting dangerous behaviors in the target area according to the features extracted by the dangerous behavior detection model specifically includes:
establishing a dangerous behavior detection model, setting initialization parameters, and training the dangerous behavior detection model through a dangerous behavior data set;
segmenting the monitoring video stream information of a target area, acquiring image characteristics and optical flow characteristics of frame image information in each segment of video stream information through sparse sampling, and taking the image characteristics and the optical flow characteristics as local characteristics of two characteristic categories;
aggregating various local features to obtain global features, converting the global features into classified feature vectors through multi-scale feature fusion, and obtaining dangerous behavior recognition results under corresponding feature categories according to the classified feature vectors;
and weighting the dangerous behavior recognition results corresponding to the local features of the two feature categories according to a preset weight to generate a final dangerous behavior prediction result.
In this scheme, the determining the dangerous behavior level of the detected object according to the detection result of the dangerous behavior detection model, and generating the early warning information according to the dangerous behavior level specifically includes:
acquiring a detection result of a dangerous behavior detection model, constructing a dangerous behavior evaluation standard system, judging a dangerous coefficient of a dangerous behavior according to the occurrence frequency of the dangerous behavior, the possibility of causing an accident and the severity after the accident, and acquiring an evaluation score according to the dangerous coefficient through the dangerous behavior evaluation standard system;
generating weight information according to the geographical position information and the pedestrian volume information of the target area;
generating a comprehensive evaluation score of dangerous behaviors in a target region according to the weight information and the evaluation score, presetting a comprehensive evaluation score threshold interval of the dangerous behaviors, and determining the grade information of the dangerous behaviors according to the interval to which the comprehensive evaluation score belongs;
and generating corresponding early warning information according to the grade information, sending and displaying the early warning information according to a preset method, and generating voice reminding information in the target area to remind the target object.
In the scheme, when the monitoring video stream information of the dangerous area is acquired, the environmental audio information is acquired, the sensitive word information is acquired according to the environmental audio information, and the early warning is performed according to the sensitive word information, which specifically comprises the following steps:
setting a preset area according to the distribution and area information of the dangerous area in the target area, judging whether a person enters the preset area, and starting the visual monitoring of the dangerous area when the person enters the preset area;
presetting reference image information of a dangerous area in a target area, acquiring monitoring frame image information of the dangerous area in the target area, and preprocessing according to the monitoring frame image information;
acquiring the contour features of the preprocessed monitoring frame image information according to an edge detection operator, and performing similarity calculation according to the contour features of the monitoring frame image information and the contour features in the reference image information;
comparing and judging the similarity with a preset similarity threshold, and if the similarity is smaller than the preset similarity threshold, proving that a person breaks into the dangerous area, and generating early warning information;
synchronously acquiring environment audio information, identifying the environment audio information to acquire an effective waveband, judging whether a voice signal exists or not according to the effective waveband, identifying the voice signal and converting the voice signal into text information;
judging whether the text information contains sensitive words or not, if yes, acquiring the occurrence frequency of the sensitive words in the text information, and when the occurrence frequency is larger than a preset occurrence frequency threshold, generating early warning information and displaying the early warning information according to a preset method.
The invention discloses a dangerous behavior detection method and system based on machine vision and deep learning, wherein the method comprises the following steps: acquiring monitoring video stream information of a target area, acquiring frame image information from the monitoring video stream information and preprocessing it; acquiring a region of interest of the preprocessed frame image information, extracting features to identify dangerous goods, and determining the interaction relationship between the dangerous goods and a target object; constructing a dangerous behavior detection model to detect dangerous behaviors in the target area; judging the dangerous behavior grade of the detected object according to the detection result and generating early warning information; and, meanwhile, defining a dangerous area in the target area, acquiring environmental audio information while acquiring the monitoring video stream information of the dangerous area, obtaining sensitive word information from the environmental audio information, and giving early warning according to the sensitive word information. The method and the system can monitor and identify dangerous behaviors in the target area, generate early warning information while ensuring identification accuracy, and realize sharing of the danger information in the target area.
Drawings
FIG. 1 is a flow chart of a dangerous behavior detection method based on machine vision and deep learning according to the present invention;
FIG. 2 is a flow chart of a method for dangerous behavior detection according to the dangerous behavior detection model of the present invention;
FIG. 3 is a flow chart of a method for performing early warning according to audio information of a dangerous area;
fig. 4 shows a block diagram of a dangerous behavior detection system based on machine vision and deep learning according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention, taken in conjunction with the accompanying drawings and detailed description, is set forth below. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Fig. 1 shows a flowchart of a dangerous behavior detection method based on machine vision and deep learning according to the present invention.
As shown in fig. 1, a first aspect of the present invention provides a dangerous behavior detection method based on machine vision and deep learning, including:
S102, acquiring monitoring video stream information of a target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information;
S104, acquiring an interested area of the preprocessed frame image information, extracting features to identify dangerous goods, and determining an interactive relation between the dangerous goods and a target object;
S106, constructing a dangerous behavior detection model based on the convolutional neural network model, and detecting dangerous behaviors in a target area according to the dangerous behavior detection model;
S108, judging the dangerous behavior grade of the detected object according to the detection result of the dangerous behavior detection model, and generating early warning information according to the dangerous behavior grade;
and S110, simultaneously, defining a dangerous area in the target area, acquiring environmental audio information when acquiring monitoring video stream information of the dangerous area, acquiring sensitive word information according to the environmental audio information, and performing early warning according to the sensitive word information.
It should be noted that acquiring the monitoring video stream information of the target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information specifically includes: acquiring the monitoring video stream information of the target area through machine vision, segmenting the monitoring video stream information through OpenCV, identifying frame images containing portrait information, and taking the frame images containing a portrait as key frame image information; converting the key frame image information to grayscale and denoising it, filtering out the background image based on an edge algorithm, and acquiring a region of interest in the key frame image information; and acquiring optical flow vectors in the key frame image information by an optical flow method to generate optical flow features.
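For illustration only, the following is a minimal Python/OpenCV sketch of the preprocessing steps described above. The HOG person detector standing in for the portrait-recognition step, the Farneback algorithm standing in for "an optical flow method", and all helper names are assumptions of this sketch, not part of the claimed method.

```python
# Preprocessing sketch: keyframe selection, grayscale + denoise, edge-based
# background suppression, dense optical flow (all parameter values illustrative).
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def extract_keyframes(video_path, stride=5):
    """Split the surveillance stream and keep only frames that contain a person."""
    cap = cv2.VideoCapture(video_path)
    keyframes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
            if len(boxes) > 0:              # frame contains portrait information
                keyframes.append(frame)
        idx += 1
    cap.release()
    return keyframes

def preprocess(frame):
    """Grayscale + denoise, then an edge map used to filter out the background."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(denoised, 50, 150)
    return denoised, edges

def optical_flow_feature(prev_gray, cur_gray):
    """Dense optical flow vectors between consecutive key frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray,
                                        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow                              # H x W x 2 motion vectors
```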
It should be noted that identifying the dangerous goods and determining the interaction relationship between the dangerous goods and the target object specifically includes: identifying dangerous goods based on a Faster R-CNN model, acquiring feature map information of the region of interest in the frame image information through VGG16, convolving the feature map information through an RPN (Region Proposal Network), generating proposed regions of different scales, and performing anchor frame regression; performing a pooling operation on the proposed regions, judging whether dangerous goods are contained through a fully connected layer and a Softmax classifier, acquiring the category information of the dangerous goods, and acquiring the position offset of the anchor frame through a second anchor frame regression to generate an accurate dangerous goods target anchor frame; when dangerous goods are detected, acquiring the position information of the hand key points of the target object in the frame image information, and calculating distance information between the dangerous goods target anchor frame and the hand key points of the target object; when the distance information is smaller than a preset distance threshold, judging that a person holding dangerous goods exists in the target area and generating early warning information; and when the distance information is greater than the preset distance threshold, marking the target object and closely monitoring the behavior information of the marked object. In this way, the interaction relationship between the dangerous goods and the target object is determined from the position information of the dangerous goods target anchor frame and its distance to the hand key points of the target object; if the target object is judged to be holding dangerous goods, early warning information is generated and the behavior of that person is monitored closely.
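The interaction rule between the dangerous goods anchor frame and the hand key points can be sketched as follows; `hazard_boxes` and `hand_points` are assumed to come from a Faster R-CNN detector and a pose estimator respectively, and the pixel threshold is an illustrative value standing in for the preset distance threshold.

```python
# Hazard/hand interaction rule sketch (detector and pose estimator assumed external).
import numpy as np

DIST_THRESHOLD = 60.0   # pixels; the patent only specifies "a preset distance threshold"

def box_center(box):
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def interaction_check(hazard_boxes, hand_points):
    """Return 'alarm' events when a hand lies within the threshold of a hazard box,
    otherwise 'mark' events so the person is flagged for focused monitoring."""
    events = []
    for box in hazard_boxes:
        center = box_center(box)
        for hand in hand_points:
            dist = float(np.linalg.norm(center - np.asarray(hand, dtype=float)))
            if dist < DIST_THRESHOLD:
                events.append(("alarm", box, hand, dist))   # person holding dangerous goods
            else:
                events.append(("mark", box, hand, dist))    # mark target, monitor closely
    return events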
Fig. 2 shows a flow chart of a method for performing dangerous behavior detection according to a dangerous behavior detection model.
According to the embodiment of the invention, constructing a dangerous behavior detection model based on a convolutional neural network model and detecting dangerous behaviors in the target area according to the dangerous behavior detection model specifically includes the following steps:
S202, establishing a dangerous behavior detection model, setting initialization parameters, and training the dangerous behavior detection model through a dangerous behavior data set;
S204, segmenting the monitoring video stream information of the target area, acquiring image features and optical flow features of frame image information in each segment of video stream information through sparse sampling, and taking the image features and the optical flow features as local features of two feature categories;
S206, aggregating various local features to obtain global features, converting the global features into classified feature vectors through multi-scale feature fusion, and obtaining dangerous behavior recognition results under corresponding feature categories according to the classified feature vectors;
and S208, weighting the dangerous behavior recognition results corresponding to the local features of the two feature categories according to preset weights to generate a final dangerous behavior prediction result.
It should be noted that the dangerous behavior detection model is established based on a two-stream convolutional network and is trained with the UCF101 data set; different behavior data are selected according to the location of the target area and the nature of the site to generate a training set. Meanwhile, in order to enrich the data and enhance the generalization capability of the model, data augmentation is applied to the training set to enlarge the original training set. When the global features are obtained, aggregated features are generated through feature aggregation, the global features are obtained from the aggregated features, and the recognition and detection results for each feature category are obtained through the classifier of the dangerous behavior detection model.
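The weighted late fusion of the two feature categories (step S208) can be sketched as follows; the 0.4/0.6 spatial/temporal weights are illustrative, since the patent only specifies "a preset weight", and `rgb_scores`/`flow_scores` are assumed to be the per-class outputs of the two streams.

```python
# Two-stream late-fusion sketch (weights illustrative).
import numpy as np

def fuse_predictions(rgb_scores, flow_scores, w_rgb=0.4, w_flow=0.6):
    """Weight the per-class scores of the RGB stream and the optical-flow stream
    and return the final dangerous-behavior prediction."""
    rgb_scores = np.asarray(rgb_scores, dtype=float)
    flow_scores = np.asarray(flow_scores, dtype=float)
    fused = w_rgb * rgb_scores + w_flow * flow_scores
    return int(np.argmax(fused)), fused

# Example: class 0 = normal, class 1 = dangerous behavior
label, scores = fuse_predictions([0.7, 0.3], [0.2, 0.8])
```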
It should be noted that determining the dangerous behavior level of the detected object according to the detection result of the dangerous behavior detection model, and generating the early warning information according to the dangerous behavior level, specifically includes: acquiring the detection result of the dangerous behavior detection model, constructing a dangerous behavior evaluation standard system, and judging the danger coefficient of a dangerous behavior according to the occurrence frequency of the dangerous behavior, the possibility of causing an accident and the severity after the accident, wherein base scores of different grades are preset for the occurrence frequency, the accident possibility and the severity respectively; obtaining an evaluation score according to the danger coefficient through the dangerous behavior evaluation standard system; generating weight information according to the geographical position information and the pedestrian volume information of the target area; generating a comprehensive evaluation score of the dangerous behaviors in the target area according to the weight information and the evaluation score, presetting threshold intervals for the comprehensive evaluation score of the dangerous behaviors, and determining the grade information of the dangerous behaviors according to the interval to which the comprehensive evaluation score belongs; and generating corresponding early warning information according to the grade information, sending and displaying the early warning information according to a preset method, and generating voice reminding information in the target area to remind the target object.
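A minimal sketch of the grading logic described above follows; the multiplicative combination of the three factors, the location/footfall weights, and the grade intervals are illustrative assumptions, not values fixed by the patent.

```python
# Risk-grading sketch (all numeric values illustrative).
def risk_coefficient(frequency, likelihood, severity):
    """Base scores (e.g. 1-5 each) for occurrence frequency, accident likelihood, severity."""
    return frequency * likelihood * severity

def comprehensive_score(coefficient, location_weight, traffic_weight):
    """Weight the evaluation score by geographical position and pedestrian volume."""
    return coefficient * location_weight * traffic_weight

def risk_level(score, thresholds=(20.0, 60.0)):
    """Map the comprehensive score onto preset threshold intervals."""
    low, high = thresholds
    if score < low:
        return "low"
    if score < high:
        return "medium"
    return "high"

level = risk_level(comprehensive_score(risk_coefficient(3, 4, 4), 1.2, 1.1))
```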
Fig. 3 shows a flow chart of the method for early warning according to the audio information of the dangerous area.
According to the embodiment of the invention, when the monitoring video stream information of the dangerous area is obtained, the environment audio information is obtained, the sensitive word information is obtained according to the environment audio information, and the early warning is carried out according to the sensitive word information, which specifically comprises the following steps:
S302, setting a preset region according to the distribution and area information of the dangerous region in the target region, judging whether a person enters the preset region, and starting the visual monitoring of the dangerous region when the person enters the preset region;
S304, presetting reference image information of a dangerous area in a target area, acquiring monitoring frame image information of the dangerous area in the target area, and preprocessing according to the monitoring frame image information;
S306, acquiring the contour features of the preprocessed monitoring frame image information according to an edge detection operator, and performing similarity calculation according to the contour features of the monitoring frame image information and the contour features in the reference image information;
S308, comparing and judging the similarity with a preset similarity threshold, and if the similarity is smaller than the preset similarity threshold, proving that people break into the dangerous area, and generating early warning information;
S310, synchronously acquiring environment audio information, identifying the environment audio information to acquire an effective waveband, judging whether a voice signal exists or not according to the effective waveband, identifying the voice signal and converting the voice signal into text information;
S312, judging whether the text information contains sensitive words or not, if yes, acquiring the occurrence frequency of the sensitive words in the text information, and when the occurrence frequency is larger than a preset occurrence frequency threshold, generating early warning information and displaying the early warning information according to a preset method.
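A minimal sketch of steps S306-S312 is given below; the largest-contour matchShapes comparison is only one possible contour-similarity measure, the speech-to-text step is assumed to have already produced `audio_text`, and the thresholds and the sensitive-word list are illustrative.

```python
# Danger-zone checks sketch: contour similarity against a reference image,
# plus sensitive-word counting on transcribed audio (values illustrative).
import cv2

SIM_THRESHOLD = 0.85          # preset similarity threshold
FREQ_THRESHOLD = 2            # preset occurrence-frequency threshold
SENSITIVE_WORDS = {"help", "fire", "knife"}

def intrusion_detected(reference_gray, frame_gray):
    """Compare contour features of the monitored frame against the reference image."""
    ref_cnts, _ = cv2.findContours(cv2.Canny(reference_gray, 50, 150),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cur_cnts, _ = cv2.findContours(cv2.Canny(frame_gray, 50, 150),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not ref_cnts or not cur_cnts:
        return False
    ref_main = max(ref_cnts, key=cv2.contourArea)
    cur_main = max(cur_cnts, key=cv2.contourArea)
    dist = cv2.matchShapes(ref_main, cur_main, cv2.CONTOURS_MATCH_I1, 0.0)
    similarity = 1.0 / (1.0 + dist)          # 1.0 means identical contours
    return similarity < SIM_THRESHOLD        # low similarity => someone entered the zone

def sensitive_word_alert(audio_text):
    """Count sensitive words in the transcribed audio and decide whether to warn."""
    words = audio_text.lower().split()
    hits = sum(words.count(w) for w in SENSITIVE_WORDS)
    return hits > FREQ_THRESHOLD
```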
According to the embodiment of the invention, the method further comprises, after the early warning information is generated, sending the early warning information to security personnel according to a distance-priority principle, specifically:
when dangerous behaviors are detected in the target area or dangerous conditions occur in the dangerous area, generating early warning information and acquiring position information of a monitored target dangerous source in the target area;
acquiring current position distribution of security personnel in a target area, acquiring the distance between the position of the security personnel and position information of a monitored target hazard source in the target area, and presetting a distance range threshold;
sending emergency early warning information to security personnel within a preset distance range threshold value, and guiding the security personnel to go to an area where a danger source is located for processing;
sending low-grade early warning information to security personnel outside a preset distance range threshold, acquiring the position of the security personnel and the relative position information of each safety channel in the target area, and distributing the security personnel to evacuate people in the target area according to the relative position information.
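The distance-priority dispatch rule described above can be sketched as follows; positions are assumed to be 2-D floor-plan coordinates and the range threshold is an illustrative value.

```python
# Distance-priority dispatch sketch (coordinates and threshold illustrative).
import math

RANGE_THRESHOLD = 50.0

def dispatch(hazard_pos, guards):
    """guards: {name: (x, y)}. Guards inside the range threshold receive the emergency
    alert; the rest receive a low-level alert and are assigned to evacuation duty."""
    emergency, evacuation = [], []
    for name, pos in guards.items():
        dist = math.dist(hazard_pos, pos)
        (emergency if dist <= RANGE_THRESHOLD else evacuation).append((name, dist))
    return sorted(emergency, key=lambda x: x[1]), sorted(evacuation, key=lambda x: x[1])

urgent, support = dispatch((10.0, 20.0), {"guard_a": (12.0, 25.0), "guard_b": (90.0, 80.0)})
```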
According to the embodiment of the present invention, the method further includes generating personnel evacuation information according to the position of the dangerous source in the target area, specifically:
when the dangerous behavior detection model detects that dangerous behaviors exist in the target area, acquiring position information of a dangerous source, dividing the target area into a plurality of sub-areas, and estimating the crowd quantity information of each sub-area in the target area;
judging a dangerous area according to the position information of the dangerous source, and planning evacuation routes of different areas according to the number information of people in each sub-area, the dangerous area and the position information of the safe channel in the target area;
predicting the crowding degree of each evacuation route, selecting the evacuation route with the lowest crowding degree as a final evacuation route, and calling an emergency command system to evacuate personnel through broadcasting;
and acquiring the pedestrian volume information of each safety channel during evacuation, and if the pedestrian volume information of the current safety channel is larger than a preset pedestrian volume threshold value, acquiring the safety channel position information with small pedestrian volume according to the real-time pedestrian volume information of each safety channel, generating evacuation adjustment information, sending the evacuation adjustment information to security personnel, and performing evacuation adjustment.
It should be noted that the evacuation route of the field personnel in the target area is obtained through the prediction of the crowding degree, so that the evacuation chaos caused by panic when the field personnel face dangerous conditions is effectively avoided, and the high efficiency and the orderliness of the evacuation are ensured.
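One way to sketch the congestion-based route choice and the exit rebalancing described above is shown below; the crowding measure (assigned people divided by exit capacity) is an assumption of this sketch, since the patent does not fix a concrete formula.

```python
# Evacuation-route sketch: pick the least crowded route, redirect overloaded exits.
def choose_route(routes):
    """routes: {route_name: (people_assigned, exit_capacity)}.
    Pick the route with the lowest predicted crowding degree."""
    def crowding(item):
        people, capacity = item[1]
        return people / max(capacity, 1)
    return min(routes.items(), key=crowding)[0]

def rebalance(exit_counts, threshold):
    """If a safety exit exceeds the pedestrian-volume threshold, redirect to the quietest exit."""
    overloaded = {k: v for k, v in exit_counts.items() if v > threshold}
    quietest = min(exit_counts, key=exit_counts.get)
    return {k: quietest for k in overloaded}   # adjustment info sent to security staff

best = choose_route({"east_stairs": (120, 200), "west_stairs": (60, 150)})
```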
High-risk personnel are persons who pose an existing or potential threat to social security order and public safety; the security database of high-risk personnel comprises data provided by the social security management department or records of dangerous persons archived in the community.
Fig. 4 shows a block diagram of a dangerous behavior detection system based on machine vision and deep learning according to the present invention.
The second aspect of the present invention also provides a dangerous behavior detection system 4 based on machine vision and deep learning, the system including: a memory 41 and a processor 42, wherein the memory includes a dangerous behavior detection method program based on machine vision and deep learning, and when executed by the processor, the dangerous behavior detection method program based on machine vision and deep learning implements the following steps:
acquiring monitoring video stream information of a target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information;
acquiring an interested area of the preprocessed frame image information, extracting features to identify dangerous goods, and determining an interactive relation between the dangerous goods and a target object;
constructing a dangerous behavior detection model based on a convolutional neural network model, and detecting dangerous behaviors in a target area according to the dangerous behavior detection model;
judging the dangerous behavior grade of a detected object according to the detection result of the dangerous behavior detection model, and generating early warning information according to the dangerous behavior grade;
meanwhile, a dangerous area in the target area is defined, environmental audio information is obtained when monitoring video stream information of the dangerous area is obtained, sensitive word information is obtained according to the environmental audio information, and early warning is carried out according to the sensitive word information.
It should be noted that acquiring the monitoring video stream information of the target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information specifically includes: acquiring the monitoring video stream information of the target area through machine vision, segmenting the monitoring video stream information through OpenCV, identifying frame images containing portrait information, and taking the frame images containing a portrait as key frame image information; converting the key frame image information to grayscale and denoising it, filtering out the background image based on an edge algorithm, and acquiring a region of interest in the key frame image information; and acquiring optical flow vectors in the key frame image information by an optical flow method to generate optical flow features.
It should be noted that identifying the dangerous goods and determining the interaction relationship between the dangerous goods and the target object specifically includes: identifying dangerous goods based on a Faster R-CNN model, acquiring feature map information of the region of interest in the frame image information through VGG16, convolving the feature map information through an RPN (Region Proposal Network), generating proposed regions of different scales, and performing anchor frame regression; performing a pooling operation on the proposed regions, judging whether dangerous goods are contained through a fully connected layer and a Softmax classifier, acquiring the category information of the dangerous goods, and acquiring the position offset of the anchor frame through a second anchor frame regression to generate an accurate dangerous goods target anchor frame; when dangerous goods are detected, acquiring the position information of the hand key points of the target object in the frame image information, and calculating distance information between the dangerous goods target anchor frame and the hand key points of the target object; when the distance information is smaller than a preset distance threshold, judging that a person holding dangerous goods exists in the target area and generating early warning information; and when the distance information is greater than the preset distance threshold, marking the target object and closely monitoring the behavior information of the marked object. In this way, the interaction relationship between the dangerous goods and the target object is determined from the position information of the dangerous goods target anchor frame and its distance to the hand key points of the target object; if the target object is judged to be holding dangerous goods, early warning information is generated and the behavior of that person is monitored closely.
According to the embodiment of the invention, constructing a dangerous behavior detection model based on a convolutional neural network model and detecting dangerous behaviors in the target area according to the dangerous behavior detection model specifically includes:
establishing a dangerous behavior detection model, setting initialization parameters, and training the dangerous behavior detection model through a dangerous behavior data set;
segmenting the monitoring video stream information of a target area, acquiring image characteristics and optical flow characteristics of frame image information in each segment of video stream information through sparse sampling, and taking the image characteristics and the optical flow characteristics as local characteristics of two characteristic categories;
aggregating various local features to obtain global features, converting the global features into classified feature vectors through multi-scale feature fusion, and obtaining dangerous behavior recognition results under corresponding feature categories according to the classified feature vectors;
and weighting the dangerous behavior recognition results corresponding to the local features of the two feature categories according to a preset weight to generate a final dangerous behavior prediction result.
It should be noted that the dangerous behavior detection model is established based on a two-stream convolutional network and is trained with the UCF101 data set; different behavior data are selected according to the location of the target area and the nature of the site to generate a training set. Meanwhile, in order to enrich the data and enhance the generalization capability of the model, data augmentation is applied to the training set to enlarge the original training set. When the global features are obtained, aggregated features are generated through feature aggregation, the global features are obtained from the aggregated features, and the recognition and detection results for each feature category are obtained through the classifier of the dangerous behavior detection model.
It should be noted that determining the dangerous behavior level of the detected object according to the detection result of the dangerous behavior detection model, and generating the early warning information according to the dangerous behavior level, specifically includes: acquiring the detection result of the dangerous behavior detection model, constructing a dangerous behavior evaluation standard system, and judging the danger coefficient of a dangerous behavior according to the occurrence frequency of the dangerous behavior, the possibility of causing an accident and the severity after the accident, wherein base scores of different grades are preset for the occurrence frequency, the accident possibility and the severity respectively; obtaining an evaluation score according to the danger coefficient through the dangerous behavior evaluation standard system; generating weight information according to the geographical position information and the pedestrian volume information of the target area; generating a comprehensive evaluation score of the dangerous behaviors in the target area according to the weight information and the evaluation score, presetting threshold intervals for the comprehensive evaluation score of the dangerous behaviors, and determining the grade information of the dangerous behaviors according to the interval to which the comprehensive evaluation score belongs; and generating corresponding early warning information according to the grade information, sending and displaying the early warning information according to a preset method, and generating voice reminding information in the target area to remind the target object.
According to the embodiment of the invention, when acquiring the monitoring video stream information of the dangerous area, the environment audio information is acquired, the sensitive word information is acquired according to the environment audio information, and the early warning is performed according to the sensitive word information, which specifically comprises the following steps:
setting a preset region according to the distribution and area information of the dangerous region in the target region, judging whether a person enters the preset region, and starting the visual monitoring of the dangerous region when the person enters the preset region;
presetting reference image information of a dangerous area in a target area, acquiring monitoring frame image information of the dangerous area in the target area, and preprocessing according to the monitoring frame image information;
acquiring the contour features of the preprocessed monitoring frame image information according to an edge detection operator, and performing similarity calculation according to the contour features of the monitoring frame image information and the contour features in the reference image information;
comparing and judging the similarity with a preset similarity threshold, and if the similarity is smaller than the preset similarity threshold, proving that a person breaks into the dangerous area, and generating early warning information;
synchronously acquiring environment audio information, identifying the environment audio information to acquire effective wave bands, judging whether a voice signal exists or not through the effective wave bands, identifying the voice signal and converting the voice signal into text information;
judging whether the text information contains sensitive words or not, if yes, acquiring the occurrence frequency of the sensitive words in the text information, and when the occurrence frequency is larger than a preset occurrence frequency threshold, generating early warning information and displaying the early warning information according to a preset method.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps of implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer-readable storage medium, and when executed, executes the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media capable of storing program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present invention, and shall cover the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A dangerous behavior detection method based on machine vision and deep learning is characterized by comprising the following steps:
acquiring monitoring video stream information of a target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information;
acquiring an interested area of the preprocessed frame image information, extracting features to identify dangerous goods, and determining an interactive relation between the dangerous goods and a target object;
constructing a dangerous behavior detection model based on a convolutional neural network model, and detecting dangerous behaviors in a target area according to the dangerous behavior detection model;
judging the dangerous behavior grade of the detected object according to the detection result of the dangerous behavior detection model, and generating early warning information according to the dangerous behavior grade;
meanwhile, defining a dangerous area in the target area, acquiring environmental audio information when acquiring the monitoring video stream information of the dangerous area, acquiring sensitive word information according to the environmental audio information, and performing early warning according to the sensitive word information.
2. The dangerous behavior detection method based on machine vision and deep learning according to claim 1, wherein the acquiring of the monitoring video stream information of the target area, the acquiring of the frame image information according to the monitoring video stream information, and the preprocessing of the frame image information specifically comprise:
segmenting the monitoring video stream information of the target area, identifying frame images containing human figure information, and taking the frame images containing human figures as key frame image information;
graying and denoising the key frame image information, filtering out the background image based on an edge algorithm, and acquiring a region of interest in the key frame image information;
and acquiring an optical flow vector in the key frame image information to generate optical flow characteristics.
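Purely as an illustration of the preprocessing recited in claim 2, the graying, denoising, edge-based background filtering and optical flow extraction could be sketched with OpenCV as follows; the Gaussian blur, Canny operator, Farneback optical flow and HOG-based person check are assumptions standing in for the unspecified operators, not part of the claimed method.

```python
import cv2

# Pedestrian detector used only to decide whether a frame counts as a key frame (assumed HOG).
_HOG = cv2.HOGDescriptor()
_HOG.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def contains_person(frame_bgr):
    """Key-frame test: keep only frames in which a pedestrian is detected."""
    rects, _ = _HOG.detectMultiScale(frame_bgr, winStride=(8, 8))
    return len(rects) > 0

def preprocess_key_frame(prev_gray, frame_bgr):
    """Gray and denoise a key frame, filter the background with an edge operator,
    and compute dense optical flow vectors against the previous key frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)        # graying
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)              # denoising (assumed Gaussian)
    edges = cv2.Canny(denoised, 50, 150)                      # edge-based background filtering (assumed Canny)
    flow = None
    if prev_gray is not None:
        # dense optical flow between consecutive key frames (assumed Farneback)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, denoised, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    return denoised, edges, flow
```

In such a sketch, only frames for which contains_person returns True would be passed to preprocess_key_frame as the key frames of claim 2.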
3. The dangerous behavior detection method based on machine vision and deep learning according to claim 1, wherein the performing of dangerous goods identification and the determining of the interaction relationship between the dangerous goods and the target object specifically comprise:
acquiring and identifying dangerous goods based on a FasterRCNN model, acquiring feature map information of the region of interest in the frame image information through VGG16, performing convolution on the feature map information through a Region Proposal Network (RPN), generating proposed regions of different scales, and performing anchor frame regression;
performing a pooling operation on the proposed regions, judging whether dangerous goods are contained through a fully connected layer and a Softmax classifier, acquiring category information of the dangerous goods, and acquiring the position offset of the anchor frame through a further anchor frame regression to generate an accurate dangerous goods target anchor frame;
when dangerous goods are detected, acquiring position information of the hand key points of the target object in the frame image information, and calculating distance information between the dangerous goods target anchor frame and the hand key points of the target object according to the respective position information;
when the distance information is smaller than a preset distance threshold, judging that a person holding dangerous goods exists in the target area, and generating early warning information;
and when the distance information is greater than the preset distance threshold, marking the target object and closely monitoring the behavior information of the marked object.
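For orientation only, the claim-3 pipeline (VGG16 feature maps, RPN proposals, RoI pooling with a fully connected Softmax head, and the hand-to-anchor-frame distance test) could be assembled with torchvision roughly as below; the anchor sizes, the 80-pixel threshold and the helper names are illustrative assumptions rather than values fixed by the claim.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

def build_dangerous_goods_detector(num_classes):
    """Faster RCNN with a VGG16 backbone: the RPN proposes multi-scale regions on the
    VGG16 feature maps, RoI pooling plus fully connected layers with Softmax classify
    dangerous goods, and a second box regression refines the anchor frames."""
    backbone = torchvision.models.vgg16(weights=None).features  # pretrained weights could be loaded instead
    backbone.out_channels = 512                                  # VGG16 conv feature maps have 512 channels
    anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                       aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)
    return FasterRCNN(backbone, num_classes=num_classes,
                      rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler)

def holds_dangerous_goods(box_xyxy, hand_keypoints, distance_threshold=80.0):
    """Claim-3 style interaction test: distance between the dangerous-goods anchor frame
    centre and the nearest hand key point, compared with a preset threshold (in pixels)."""
    cx = (box_xyxy[0] + box_xyxy[2]) / 2.0
    cy = (box_xyxy[1] + box_xyxy[3]) / 2.0
    distances = [((hx - cx) ** 2 + (hy - cy) ** 2) ** 0.5 for hx, hy in hand_keypoints]
    return bool(distances) and min(distances) < distance_threshold
```

The hand key points themselves would come from a separate pose estimator, which the claim does not specify; any model that returns wrist or hand coordinates per person could feed holds_dangerous_goods.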
4. The dangerous behavior detection method based on machine vision and deep learning according to claim 1, wherein the constructing of the dangerous behavior detection model based on the convolutional neural network model and the detecting of the dangerous behaviors in the target area according to the dangerous behavior detection model specifically comprise:
establishing a dangerous behavior detection model, setting initialization parameters, and training the dangerous behavior detection model through a dangerous behavior data set;
segmenting monitoring video stream information of a target area, acquiring image characteristics and optical flow characteristics of frame image information in each segment of video stream information through sparse sampling, and taking the image characteristics and the optical flow characteristics as local characteristics of two characteristic categories;
aggregating various local features to obtain global features, converting the global features into classified feature vectors through multi-scale feature fusion, and obtaining dangerous behavior recognition results under corresponding feature categories according to the classified feature vectors;
and weighting the dangerous behavior recognition results corresponding to the local features of the two feature categories according to a preset weight to generate a final dangerous behavior prediction result.
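The segment-wise sparse sampling and two-category fusion of claim 4 resembles a two-stream, TSN-style classifier; the following PyTorch sketch shows only the score aggregation and preset-weight fusion, with the segment count, the 0.6/0.4 weights and the backbone networks left as assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamBehaviorClassifier(nn.Module):
    """Score K sparsely sampled segments with an RGB network and an optical-flow network,
    aggregate each stream over the segments, then fuse the two streams with preset weights."""

    def __init__(self, rgb_net: nn.Module, flow_net: nn.Module,
                 rgb_weight: float = 0.6, flow_weight: float = 0.4):
        super().__init__()
        self.rgb_net, self.flow_net = rgb_net, flow_net
        self.rgb_weight, self.flow_weight = rgb_weight, flow_weight

    def forward(self, rgb_segments: torch.Tensor, flow_segments: torch.Tensor) -> torch.Tensor:
        # rgb_segments: (batch, K, 3, H, W); flow_segments: (batch, K, C_flow, H, W)
        b, k = rgb_segments.shape[:2]
        # per-segment scores, then aggregation (mean) over the K segments of each stream
        rgb_scores = self.rgb_net(rgb_segments.flatten(0, 1)).view(b, k, -1).mean(dim=1)
        flow_scores = self.flow_net(flow_segments.flatten(0, 1)).view(b, k, -1).mean(dim=1)
        # weighted fusion of the recognition results of the two feature categories
        return (self.rgb_weight * rgb_scores.softmax(dim=-1)
                + self.flow_weight * flow_scores.softmax(dim=-1))
```

Any classifier mapping (N, C, H, W) inputs to per-class scores can serve as rgb_net or flow_net in this sketch; the multi-scale feature fusion of the claim is assumed to live inside those backbones.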
5. The dangerous behavior detection method based on machine vision and deep learning according to claim 1, wherein the judging of the dangerous behavior grade of the detected object according to the detection result of the dangerous behavior detection model and the generating of the early warning information according to the dangerous behavior grade specifically comprise:
acquiring the detection result of the dangerous behavior detection model, constructing a dangerous behavior evaluation standard system, judging a danger coefficient of the dangerous behavior according to the occurrence frequency of the dangerous behavior, the likelihood of causing an accident and the severity of the consequences, and acquiring an evaluation score according to the danger coefficient through the dangerous behavior evaluation standard system;
generating weight information according to the geographical position information and the pedestrian volume information of the target area;
generating a comprehensive evaluation score of the dangerous behaviors in the target area according to the weight information and the evaluation score, presetting a threshold interval of the comprehensive evaluation score of the dangerous behaviors, and determining the grade information of the dangerous behaviors according to the interval to which the comprehensive evaluation score belongs;
and generating corresponding early warning information according to the grade information, sending and displaying the early warning information according to a preset method, and generating voice reminding information in the target area to remind the target object.
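The grading logic of claim 5 reduces to a weighted comprehensive score compared against preset intervals; a minimal sketch follows, in which the 1-to-5 factor scales, the multiplicative combination and the threshold interval (25, 60) are illustrative choices not fixed by the claim.

```python
def danger_coefficient(frequency, likelihood, severity):
    """Combine the three claim-5 factors; each is assumed pre-scaled to 1-5 and
    the multiplicative rule is an illustrative choice."""
    return frequency * likelihood * severity

def dangerous_behavior_grade(coefficient, location_weight, traffic_weight,
                             thresholds=(25.0, 60.0)):
    """Weight the evaluation score by the location and pedestrian-volume weights and
    map the comprehensive score onto preset intervals (low / medium / high)."""
    score = coefficient * location_weight * traffic_weight
    if score < thresholds[0]:
        return "low", score
    if score < thresholds[1]:
        return "medium", score
    return "high", score

# Example: frequent behavior, moderate accident likelihood, serious consequences,
# observed in a busy downtown area; yields a "high" grade under these assumed scales.
grade, score = dangerous_behavior_grade(danger_coefficient(4, 3, 4),
                                        location_weight=1.2, traffic_weight=1.5)
```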
6. The dangerous behavior detection method based on machine vision and deep learning of claim 1, wherein environmental audio information is obtained when dangerous area monitoring video stream information is obtained, sensitive word information is obtained according to the environmental audio information, and early warning is performed according to the sensitive word information, specifically:
setting a preset area according to the distribution and area information of the dangerous area in the target area, judging whether a person enters the preset area, and starting the visual monitoring of the dangerous area when the person enters the preset area;
presetting reference image information of a dangerous area in a target area, acquiring monitoring frame image information of the dangerous area in the target area, and preprocessing according to the monitoring frame image information;
acquiring the contour features of the preprocessed monitoring frame image information according to an edge detection operator, and performing similarity calculation according to the contour features of the monitoring frame image information and the contour features in the reference image information;
comparing the similarity with a preset similarity threshold, and if the similarity is smaller than the preset similarity threshold, determining that a person has intruded into the dangerous area and generating early warning information;
synchronously acquiring environment audio information, identifying the environment audio information to acquire an effective waveband, judging whether a voice signal exists or not according to the effective waveband, identifying the voice signal and converting the voice signal into text information;
judging whether the text information contains sensitive words or not, if so, acquiring the occurrence frequency of the sensitive words in the text information, and when the occurrence frequency is greater than a preset occurrence frequency threshold, generating early warning information and displaying the early warning information according to a preset method.
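As a rough sketch of the two claim-6 checks, contour similarity against a reference image and sensitive-word counting over transcribed audio could look like the following; cv2.matchShapes is used here as one possible similarity measure (the claim does not name one), the speech-to-text step is assumed to happen upstream, and the thresholds are placeholders.

```python
import cv2

def main_contour(gray_image):
    """Largest external contour of an image, used as its contour feature (assumed)."""
    edges = cv2.Canny(gray_image, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def intrusion_detected(reference_gray, frame_gray, distance_threshold=0.15):
    """Compare contour features of the monitored frame against the reference image.
    cv2.matchShapes returns 0 for identical shapes, so a large distance means low
    similarity, which is treated here as a person having entered the dangerous area."""
    ref_contour = main_contour(reference_gray)
    cur_contour = main_contour(frame_gray)
    if ref_contour is None or cur_contour is None:
        return cur_contour is not None          # a contour appeared where the reference had none
    distance = cv2.matchShapes(ref_contour, cur_contour, cv2.CONTOURS_MATCH_I1, 0.0)
    return distance > distance_threshold

def sensitive_word_alert(transcript, sensitive_words, count_threshold=2):
    """Count sensitive-word occurrences in the transcribed audio and flag an alert
    when the count exceeds the preset occurrence threshold."""
    hits = sum(transcript.count(word) for word in sensitive_words)
    return hits > count_threshold, hits
```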
7. A dangerous behavior detection system based on machine vision and deep learning, the system comprising a memory and a processor, wherein the memory stores a dangerous behavior detection method program based on machine vision and deep learning, and when the processor executes the dangerous behavior detection method program based on machine vision and deep learning, the following steps are implemented:
acquiring monitoring video stream information of a target area, acquiring frame image information according to the monitoring video stream information, and preprocessing the frame image information;
acquiring an interested area of the preprocessed frame image information, extracting features to identify dangerous goods, and determining an interactive relation between the dangerous goods and a target object;
constructing a dangerous behavior detection model based on a convolutional neural network model, and detecting dangerous behaviors in a target area according to the dangerous behavior detection model;
judging the dangerous behavior grade of a detected object according to the detection result of the dangerous behavior detection model, and generating early warning information according to the dangerous behavior grade;
meanwhile, defining a dangerous area in the target area, acquiring environmental audio information when acquiring the monitoring video stream information of the dangerous area, acquiring sensitive word information according to the environmental audio information, and performing early warning according to the sensitive word information.
8. The dangerous behavior detection system based on machine vision and deep learning according to claim 7, wherein the performing of dangerous goods identification and the determining of the interaction relationship between the dangerous goods and the target object specifically comprise:
acquiring and identifying dangerous goods based on a FasterRCNN model, acquiring feature map information of the region of interest in the frame image information through VGG16, performing convolution on the feature map information through a Region Proposal Network (RPN), generating proposed regions of different scales, and performing anchor frame regression;
performing a pooling operation on the proposed regions, judging whether dangerous goods are contained through a fully connected layer and a Softmax classifier, acquiring category information of the dangerous goods, and acquiring the position offset of the anchor frame through a further anchor frame regression to generate an accurate dangerous goods target anchor frame;
when dangerous goods are detected, acquiring position information of the hand key points of the target object in the frame image information, and calculating distance information between the dangerous goods target anchor frame and the hand key points of the target object according to the respective position information;
when the distance information is smaller than a preset distance threshold, judging that a person holding dangerous goods exists in the target area, and generating early warning information;
and when the distance information is greater than the preset distance threshold, marking the target object and closely monitoring the behavior information of the marked object.
9. The dangerous behavior detection system based on machine vision and deep learning according to claim 7, wherein the constructing of the dangerous behavior detection model based on the convolutional neural network model and the detecting of the dangerous behaviors in the target area according to the dangerous behavior detection model specifically comprise:
establishing a dangerous behavior detection model, setting initialization parameters, and training the dangerous behavior detection model through a dangerous behavior data set;
segmenting the monitoring video stream information of a target area, acquiring image characteristics and optical flow characteristics of frame image information in each segment of video stream information through sparse sampling, and taking the image characteristics and the optical flow characteristics as local characteristics of two characteristic categories;
aggregating various local features to obtain global features, converting the global features into classified feature vectors through multi-scale feature fusion, and obtaining dangerous behavior recognition results under corresponding feature categories according to the classified feature vectors;
and weighting the dangerous behavior recognition results corresponding to the local features of the two feature categories according to a preset weight to generate a final dangerous behavior prediction result.
10. The dangerous behavior detection system based on machine vision and deep learning according to claim 7, wherein environmental audio information is acquired when the monitoring video stream information of the dangerous area is acquired, sensitive word information is acquired according to the environmental audio information, and early warning is performed according to the sensitive word information, specifically:
setting a preset region according to the distribution and area information of the dangerous region in the target region, judging whether a person enters the preset region, and starting the visual monitoring of the dangerous region when the person enters the preset region;
presetting reference image information of a dangerous area in a target area, acquiring monitoring frame image information of the dangerous area in the target area, and preprocessing according to the monitoring frame image information;
acquiring the contour features of the preprocessed monitoring frame image information according to an edge detection operator, and performing similarity calculation according to the contour features of the monitoring frame image information and the contour features in the reference image information;
comparing the similarity with a preset similarity threshold, and if the similarity is smaller than the preset similarity threshold, determining that a person has intruded into the dangerous area and generating early warning information;
synchronously acquiring environment audio information, identifying the environment audio information to acquire an effective waveband, judging whether a voice signal exists or not according to the effective waveband, identifying the voice signal and converting the voice signal into text information;
judging whether the text information contains sensitive words or not, if so, acquiring the occurrence frequency of the sensitive words in the text information, and when the occurrence frequency is greater than a preset occurrence frequency threshold, generating early warning information and displaying the early warning information according to a preset method.
CN202210497047.8A 2022-05-09 2022-05-09 Dangerous behavior detection method and system based on machine vision and deep learning Withdrawn CN114782897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210497047.8A CN114782897A (en) 2022-05-09 2022-05-09 Dangerous behavior detection method and system based on machine vision and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210497047.8A CN114782897A (en) 2022-05-09 2022-05-09 Dangerous behavior detection method and system based on machine vision and deep learning

Publications (1)

Publication Number Publication Date
CN114782897A true CN114782897A (en) 2022-07-22

Family

ID=82436937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210497047.8A Withdrawn CN114782897A (en) 2022-05-09 2022-05-09 Dangerous behavior detection method and system based on machine vision and deep learning

Country Status (1)

Country Link
CN (1) CN114782897A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082865A (en) * 2022-07-27 2022-09-20 国能大渡河检修安装有限公司 Bridge crane intrusion dangerous behavior early warning method and system based on visual image recognition
CN115082865B (en) * 2022-07-27 2022-11-11 国能大渡河检修安装有限公司 Bridge crane intrusion dangerous behavior early warning method and system based on visual image recognition
CN116740063A (en) * 2023-08-14 2023-09-12 山东众志电子有限公司 Glass fiber yarn production quality detection method based on machine vision
CN116740063B (en) * 2023-08-14 2023-11-14 山东众志电子有限公司 Glass fiber yarn production quality detection method based on machine vision
CN117172989A (en) * 2023-11-02 2023-12-05 武汉朱雀闻天科技有限公司 Intelligent campus management method and system based on big data
CN117172989B (en) * 2023-11-02 2024-02-02 武汉朱雀闻天科技有限公司 Intelligent campus management method and system based on big data
CN117315592A (en) * 2023-11-27 2023-12-29 四川省医学科学院·四川省人民医院 Identification early warning system based on robot end real-time monitoring camera shooting
CN117315592B (en) * 2023-11-27 2024-01-30 四川省医学科学院·四川省人民医院 Identification early warning system based on robot end real-time monitoring camera shooting

Similar Documents

Publication Publication Date Title
CN114782897A (en) Dangerous behavior detection method and system based on machine vision and deep learning
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
WO2021051601A1 (en) Method and system for selecting detection box using mask r-cnn, and electronic device and storage medium
CN111462488A (en) Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
CN112149512A (en) Helmet wearing identification method based on two-stage deep learning
CN115620212B (en) Behavior identification method and system based on monitoring video
CN113065474A (en) Behavior recognition method and device and computer equipment
CN111127507A (en) Method and system for determining throwing object
CN111696203A (en) Method, system, device and storage medium for pushing emergency in grading manner
CN113762229B (en) Intelligent identification method and system for building equipment in building site
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN115272656A (en) Environment detection alarm method and device, computer equipment and storage medium
CN115359471A (en) Image processing and joint detection model training method, device, equipment and storage medium
You et al. PANDA: predicting road risks after natural disasters leveraging heterogeneous urban data
CN113850995A (en) Event detection method, device and system based on tunnel radar vision data fusion
CN111553199A (en) Motor vehicle traffic violation automatic detection technology based on computer vision
CN114913233A (en) Image processing method, apparatus, device, medium, and product
CN115810161A (en) Transformer substation fire identification method and system
CN114241401A (en) Abnormality determination method, apparatus, device, medium, and product
CN114241400A (en) Monitoring method and device of power grid system and computer readable storage medium
CN113609956A (en) Training method, recognition method, device, electronic equipment and storage medium
CN112861701A (en) Illegal parking identification method and device, electronic equipment and computer readable medium
CN112861711A (en) Regional intrusion detection method and device, electronic equipment and storage medium
CN112633163A (en) Detection method for realizing illegal operation vehicle detection based on machine learning algorithm
CN117274917B (en) Monitoring data analysis method, system and storage medium based on Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220722