CN115205767A - Smoking behavior detection method, system and device - Google Patents

Smoking behavior detection method, system and device

Info

Publication number
CN115205767A
CN115205767A (application number CN202211125461.2A)
Authority
CN
China
Prior art keywords
smoking
behavior
smoking behavior
detection
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211125461.2A
Other languages
Chinese (zh)
Inventor
梁秉豪
袁明明
王凯
王涛
倪健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Communication Information System Co Ltd
Original Assignee
Inspur Communication Information System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Communication Information System Co Ltd
Priority to CN202211125461.2A
Publication of CN115205767A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a smoking behavior detection method, system and device, belonging to the technical field of artificial intelligence analysis. The method comprises the following steps: 1. acquire video stream data, decode it and extract frames to obtain single-frame images, identify the human body and cigarettes in each image with a target-detection-based smoking behavior detection algorithm, and obtain the position coordinates and probabilities of the human body and the cigarettes respectively; 2. preliminarily calculate the probability of smoking behavior in the image from the result of step 1; when the probability is greater than a set threshold, proceed to the next step, otherwise conclude that no smoking behavior has occurred; 3. when step 2 indicates suspected smoking behavior, acquire video stream data, extract consecutive frames, perform behavior recognition with a behavior-recognition-based smoking behavior detection algorithm using spatio-temporal graph convolution, and, when smoking behavior occurs, capture the corresponding consecutive image frames and raise an alarm. The method accurately identifies smoking behavior, runs efficiently, and collects evidence of the smoking behavior.

Description

Smoking behavior detection method, system and device
Technical Field
The invention relates to the technical field of artificial intelligence analysis, and in particular to a smoking behavior detection method, system and device.
Background
With the continuous development of the service industry, demand for Internet catering services such as prefabricated dishes and takeaway grows day by day; especially during periods when dining in is inconvenient, these services have shown their importance and brought convenience to people. Rapidly identifying abnormal behavior of kitchen staff with artificial intelligence video analysis has therefore become a key research direction in the industry. Applications that check whether kitchen staff correctly wear chef hats, masks and chef uniforms are already mature, but supervision of illegal smoking behavior remains at a much lower level. Because cigarettes are small targets and smoking is to some extent concealed, it is often difficult to identify, yet smoking may harm food safety far more than behaviors such as incorrectly worn chef uniforms. An effective and accurate means of supervision is therefore urgently needed.
Existing smoking behavior detection technologies fall into three main categories. The first uses additional devices such as smoke alarms and infrared cameras to identify smoking; it works well but is costly and inflexible to deploy, so it is rarely adopted. The second is based on target detection and identifies smoking behavior in a single-frame image as a whole; it runs efficiently, but it is difficult to distinguish merely holding a cigarette from actually smoking and to accurately collect evidence, and because the model receives little information its accuracy is hard to improve further. The third finds smoking behavior from temporal information in consecutive frames; its accuracy is relatively high, but the detected overall behavior features are easily disturbed by other similar behaviors (such as wiping the mouth or picking teeth).
Disclosure of Invention
In view of the above shortcomings, the technical task of the invention is to provide a smoking behavior detection method, system and device that accurately identify smoking behavior, run efficiently, and collect evidence of and raise alarms for smoking behavior.
The technical solution adopted by the invention to solve the technical problem is as follows:
a smoking behavior detection method comprises the following implementation steps:
1) Acquiring video stream data, decoding it and extracting frames to obtain single-frame images, identifying the human body and cigarettes in each image with a target-detection-based smoking behavior detection algorithm, and obtaining the position coordinates and probabilities of the human body and the cigarettes respectively;
2) Preliminarily calculating the probability of smoking behavior in the image from the result of step 1), and proceeding to the next step when the probability is greater than a set threshold, otherwise determining that no smoking behavior has occurred;
3) When step 2) indicates suspected smoking behavior, acquiring video stream data, extracting consecutive frames, performing behavior recognition with a behavior-recognition-based smoking behavior detection algorithm using spatio-temporal graph convolution, and, when smoking behavior occurs, capturing the corresponding consecutive image frames and raising an alarm.
The method removes the traditional smoking detection methods' dependence on hardware such as smoke alarms and infrared cameras, and also addresses the low accuracy of conventional neural network methods on small targets, thereby improving the accuracy of smoking behavior detection. By fusing target detection, key point detection and graph convolution, it solves the prior-art problems that evidence of smoking behavior is difficult to collect and that detection is easily disturbed by similar behaviors.
Preferably, the target-detection-based smoking behavior detection algorithm is trained and optimized as follows:
data labeling: only images of people holding cigarettes are selected as the training data set, and the person position and the cigarette position are labeled separately in each image; using this training data set deliberately introduces a model bias, making the model more sensitive to a person holding a cigarette and to the hand-held cigarette target;
model training: a classifier C is obtained by pre-training the model on the COCO 2017 data set, and a classifier C' is then obtained by transfer learning and local fine-tuning of the model on the self-collected and labeled smoking data set; C' serves as the target-detection-based smoking behavior detection algorithm;
model testing: the success rate of the bias introduction is evaluated with the ASR (Attack Success Rate), calculated as follows:
[formula given as an image in the original publication]
where C and C' are the classifiers before and after fine-tuning, x is the input image data, and y is the class label predicted by the classifier.
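The ASR formula itself appears only as an embedded image in the original publication and cannot be reproduced verbatim here. One common form consistent with the surrounding description, offered purely as an assumption, measures the fraction of originally correct predictions that change after fine-tuning:

\[
\mathrm{ASR} \;=\; \frac{\sum_{(x,\,y)\in D}\mathbb{1}\!\left[\,C(x)=y \;\wedge\; C'(x)\neq y\,\right]}{\sum_{(x,\,y)\in D}\mathbb{1}\!\left[\,C(x)=y\,\right]}
\]

where D is the test set and 1[.] is the indicator function; under this reading, a larger ASR would indicate that the fine-tuning shifted the classifier's behavior more strongly, i.e., that the intended bias was introduced more successfully.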
Preferably, the behavior-recognition-based smoking behavior detection algorithm obtains, via human key point detection, 10 key points of the upper body for behavior recognition, including the nose, left and right ears, left and right shoulders, left and right elbows, and left and right wrists;
behavior recognition with the spatio-temporal graph convolution technique proceeds as follows:
the human key points are represented by a 10 x 10 adjacency matrix A, in which the value for key points i and j is set to 1 if they are connected to each other and 0 otherwise; in addition, the adjacency value between the nose and the wrists is set to 1;
the adjacency matrix of each frame image is taken as input, and whether smoking behavior occurs is then predicted with the ST-GCN algorithm.
Obtaining only the 10 upper-body key points (nose, left and right ears, left and right shoulders, left and right elbows, left and right wrists) through human key point detection greatly reduces the computation of the model while weakening the influence of lower-limb movement on smoking behavior recognition. Because the wrist approaches the nose when smoking, the adjacency between the nose and the wrists in the adjacency matrix is set to 1.
Further, the 10 key points are labeled: nose Nose, neck Neck, left eye LE, right eye RE, left shoulder LS, right shoulder RS, left elbow LEL, right elbow REL, left wrist LW, right wrist RW;
the 10 x 10 adjacency matrix A is formed with these labels.
Preferably, the 18 key points of each human body appearing in the consecutive frame images are identified with the OpenPose algorithm, from which the 10 key points of the upper body are obtained.
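The patent does not state how the 18 OpenPose key points are reduced to the 10 upper-body points. The sketch below is only an assumption based on the standard COCO-18 ordering commonly produced by OpenPose (0 nose, 1 neck, 2-4 right shoulder/elbow/wrist, 5-7 left shoulder/elbow/wrist, 14-17 eyes and ears); the index table, label set and function name are illustrative, not part of the invention.

```python
# Hypothetical mapping from an OpenPose COCO-18 skeleton to the 10 upper-body
# key points used here (indices assume the standard COCO-18 order; the actual
# mapping used by the inventors is not disclosed).
import numpy as np

# COCO-18 indices: 0 Nose, 1 Neck, 2 RShoulder, 3 RElbow, 4 RWrist,
# 5 LShoulder, 6 LElbow, 7 LWrist, 14 REye, 15 LEye, 16 REar, 17 LEar
UPPER_BODY_IDX = {
    "Nose": 0, "Neck": 1,
    "LE": 15, "RE": 14,   # eyes, matching the labels of Table 1
    "LS": 5, "RS": 2,     # shoulders
    "LEL": 6, "REL": 3,   # elbows
    "LW": 7, "RW": 4,     # wrists
}
ORDER = ["Nose", "Neck", "LE", "RE", "LS", "RS", "LEL", "REL", "LW", "RW"]

def select_upper_body(pose_18: np.ndarray) -> np.ndarray:
    """Reduce an (18, 3) OpenPose skeleton (x, y, confidence) to the
    (10, 3) upper-body subset in the fixed order above."""
    return pose_18[[UPPER_BODY_IDX[name] for name in ORDER]]
```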
Preferably, the video stream data is acquired from a camera or from a video cloud platform.
The invention also claims a smoking behavior detection system, which comprises a video frame extraction module, a target-detection-based smoking detection module, a behavior-recognition-based smoking detection module, a smoking behavior detection module, a screenshot module and an alarm module:
video frame extraction module: acquires video stream data and performs decoding and frame extraction to obtain single-frame or consecutive-frame images as input to the algorithms;
target-detection-based smoking detection module: preliminarily obtains the spatial position and probability of suspected smoking behavior with a target detection algorithm;
behavior-recognition-based smoking detection module: further detects the spatial position and probability of smoking behavior with human key point identification and a spatio-temporal graph convolutional neural network algorithm;
smoking behavior detection module: judges whether smoking behavior occurs, based on the recognition results of the two smoking detection modules and a set threshold, and obtains the position where the smoking behavior occurs;
screenshot and alarm module: when smoking behavior occurs, returns its position information and probability and captures screenshots of the consecutive frames showing the behavior. An illustrative sketch of the data exchanged between these modules is given below.
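The following data structures are an assumption introduced only to make the module boundaries concrete; the type names and fields do not appear in the patent.

```python
# Assumed data structures passed between the system modules (illustrative only).
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class SuspectedSmoking:
    """Output of the target-detection-based smoking detection module."""
    box: Tuple[int, int, int, int]  # spatial position (x1, y1, x2, y2) of the suspected behavior
    probability: float              # preliminary probability of smoking

@dataclass
class SmokingAlarm:
    """Output of the screenshot and alarm module when smoking is confirmed."""
    box: Tuple[int, int, int, int]
    probability: float
    evidence_frames: List[np.ndarray]  # screenshots of the consecutive frames kept as evidence
```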
Preferably, the target-detection-based smoking behavior detection algorithm is trained and optimized as follows:
data labeling: only images of people holding cigarettes are selected as the training data set, and the person position and the cigarette position are labeled separately in each image; using this training data set deliberately introduces a model bias, making the model more sensitive to a person holding a cigarette and to the hand-held cigarette target;
model training: a classifier C is obtained by pre-training the model on the COCO 2017 data set, and a classifier C' is then obtained by transfer learning and local fine-tuning of the model on the self-collected and labeled smoking data set; C' serves as the target-detection-based smoking behavior detection algorithm;
model testing: the success rate of the bias introduction is evaluated with the ASR (Attack Success Rate), calculated as follows:
[formula given as an image in the original publication]
where C and C' are the classifiers before and after fine-tuning, x is the input image data, and y is the class label predicted by the classifier.
Preferably, the behavior-recognition-based smoking behavior detection algorithm obtains, via human key point detection, 10 key points of the upper body for behavior recognition, including the nose, left and right ears, left and right shoulders, left and right elbows, and left and right wrists;
behavior recognition with the spatio-temporal graph convolution technique proceeds as follows:
the human key points are represented by a 10 x 10 adjacency matrix A, in which the value for key points i and j is set to 1 if they are connected to each other and 0 otherwise, and the adjacency value between the nose and the wrists is set to 1;
the adjacency matrix of each frame image is taken as input, and whether smoking behavior occurs is then predicted with the ST-GCN algorithm.
The invention also claims a smoking behavior detection device, comprising at least one memory and at least one processor;
the at least one memory is used to store a machine-readable program;
the at least one processor is used to call the machine-readable program and execute the smoking behavior detection method.
Compared with the prior art, the smoking behavior detection method, system and device have the following beneficial effects:
by deliberately introducing a model bias during data selection and labeling, i.e., an implicit correlation between human features and cigarette features, the method or system effectively avoids the tendency of conventional algorithms to misjudge a person holding objects such as a straw or a pen as smoking;
by selecting 10 upper-body key points as input and analyzing consecutive frames with spatio-temporal graph convolution, the method or system reduces the computation of the model, eliminates interference from lower-body behaviors, and achieves higher recognition accuracy for smoking behavior;
in addition, the method or system combines a target detection algorithm, a human key point identification algorithm and a spatio-temporal graph convolution algorithm to detect smoking behavior, improving the detection effect through model cascading while keeping the algorithms efficient.
Drawings
FIG. 1 is a diagram of a smoking behavior detection method according to an embodiment of the present invention;
FIG. 2 is a diagram of an implementation of a smoking behavior detection system provided by an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
The embodiment of the invention provides a smoking behavior detection method; as shown in FIG. 1, the method comprises the following implementation steps:
1) Acquiring video stream data from a camera, a video cloud platform or a similar source, decoding it and extracting frames to obtain single-frame images, identifying the human body and cigarettes in each image with the target-detection-based smoking behavior detection algorithm, and obtaining the position coordinates and probabilities of the human body and the cigarettes respectively;
2) Preliminarily calculating the probability of smoking behavior in the image from the result of step 1), and proceeding to the next step when the probability is greater than a set threshold, otherwise determining that no smoking behavior has occurred;
3) When step 2) indicates suspected smoking behavior, acquiring video stream data from a camera, a video cloud platform or a similar source, extracting consecutive frames, performing behavior recognition with the behavior-recognition-based smoking behavior detection algorithm using spatio-temporal graph convolution, and, when smoking behavior occurs, capturing the corresponding consecutive image frames and raising an alarm. A simplified sketch of this cascaded pipeline is given below.
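The sketch below only illustrates the cascade of steps 1) to 3). The detector interface, the preliminary probability rule, the 0.5 threshold and the ST-GCN classifier wrapper are all hypothetical placeholders, since the patent does not fix concrete implementations for them.

```python
# Illustrative sketch of the cascaded smoking-detection pipeline
# (all function names and the 0.5 threshold are assumptions, not the patent's).
from typing import List, Tuple
import numpy as np

SUSPECT_THRESHOLD = 0.5  # assumed value of the "set threshold" in step 2)

def preliminary_smoking_probability(persons: List[dict], cigarettes: List[dict]) -> float:
    """Step 2): crude per-frame probability of smoking, here taken as the best
    joint confidence of a person box that contains a cigarette box center
    (one possible interpretation of the preliminary calculation)."""
    best = 0.0
    for p in persons:
        px1, py1, px2, py2 = p["box"]
        for c in cigarettes:
            cx = (c["box"][0] + c["box"][2]) / 2
            cy = (c["box"][1] + c["box"][3]) / 2
            if px1 <= cx <= px2 and py1 <= cy <= py2:
                best = max(best, p["score"] * c["score"])
    return best

def detect_smoking(frames: List[np.ndarray], detector, stgcn_classifier) -> Tuple[bool, float]:
    """Steps 1)-3): single-frame target detection first, ST-GCN confirmation second."""
    persons, cigarettes = detector(frames[0])    # step 1): person + cigarette boxes with scores
    p_suspect = preliminary_smoking_probability(persons, cigarettes)
    if p_suspect <= SUSPECT_THRESHOLD:           # step 2): early exit, no smoking assumed
        return False, p_suspect
    p_smoking = stgcn_classifier(frames)         # step 3): spatio-temporal confirmation
    return p_smoking > SUSPECT_THRESHOLD, p_smoking
```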
The smoking behavior detection algorithm based on target detection mainly performs model training and optimization through the following steps:
data labeling: only images of people holding cigarettes are selected as the training data set, and the person position and the cigarette position are labeled separately in each image; using this training data set deliberately introduces a model bias, making the model more sensitive to a person holding a cigarette and to the hand-held cigarette target;
model training: a classifier C is obtained by pre-training the model on the COCO 2017 data set, and a classifier C' is then obtained by transfer learning and local fine-tuning of the model on the self-collected and labeled smoking data set; C' serves as the target-detection-based smoking behavior detection algorithm (an illustrative sketch of this transfer-learning step follows the testing description below);
model testing: the success rate of the bias introduction is evaluated with the ASR (Attack Success Rate), calculated as follows:
[formula given as an image in the original publication]
where C and C' are the classifiers before and after fine-tuning, x is the input image data, and y is the class label predicted by the classifier.
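The patent does not name a specific detector. Purely as an illustration of the pre-train-then-fine-tune recipe, the sketch below starts from a torchvision Faster R-CNN pre-trained on COCO and replaces its box predictor with a head for the classes of interest; the class count and any training-loop details are assumptions.

```python
# Assumed illustration of COCO pre-training followed by local fine-tuning
# on a self-labeled smoking data set (classifier C -> classifier C').
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + person-holding-cigarette + cigarette (assumed)

def build_fine_tuned_detector() -> torch.nn.Module:
    # Classifier C: a detector pre-trained on COCO 2017.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Local fine-tuning: replace only the box-prediction head, keep the backbone.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model  # train this on the labeled smoking data set to obtain classifier C'
```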
The smoking behavior detection algorithm based on behavior recognition mainly realizes behavior detection in the following way:
human body key point location detection algorithm: identifying 18 key point positions of the human body appearing in the continuous frame images based on an OpenPose algorithm, and acquiring 10 key point positions of the upper limbs of the human body: a nose, left and right ears, left and right shoulders, left and right elbows, and left and right wrists for behavior recognition;
and (3) performing behavior identification through a time-space diagram convolution technology:
with 10 by 10 abutting matrices
Figure 517219DEST_PATH_IMAGE005
Representing key points of a human body, setting values corresponding to the key points i and j which are connected with each other as 1, otherwise, setting the values as 0; and considering the behavior that the wrist is close to the nose when smoking, the adjacency matrix value of the nose and the wrist in the adjacency matrix is also set to 1; taking the adjacency matrix of each frame image as input, and then predicting whether the smoking behavior occurs based on an ST-GCN algorithm.
The 10 key points are labeled: nose Nose, neck Neck, left eye LE, right eye RE, left shoulder LS, right shoulder RS, left elbow LEL, right elbow REL, left wrist LW, right wrist RW;
the 10 x 10 adjacency matrix A is formed with these labels, as shown in Table 1.
Table 1. Adjacency matrix A
Nose Neck LE RE LS RS LEL REL LW RW
Nose 1 1 1 1 0 0 0 0 1 1
Neck 1 1 0 0 1 1 0 0 0 0
LE 1 0 1 0 0 0 0 0 0 0
RE 1 0 0 1 0 0 0 0 0 0
LS 0 1 0 0 1 0 1 0 0 0
RS 0 1 0 0 0 1 0 1 0 0
LEL 0 0 0 0 1 0 1 0 1 0
REL 0 0 0 0 0 1 0 1 0 1
LW 1 0 0 0 0 0 1 0 1 0
RW 1 0 0 0 0 0 0 1 0 1
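As an illustration only, the adjacency matrix of Table 1 and a per-frame input tensor for the ST-GCN could be assembled as follows; the node order and edge list are read off Table 1, but the code itself (and the assumed (C, T, V) input layout) is not part of the patent.

```python
# Sketch: build the 10x10 adjacency matrix of Table 1 (self-loops included,
# plus the extra nose-wrist connections used for smoking recognition).
import numpy as np

NODES = ["Nose", "Neck", "LE", "RE", "LS", "RS", "LEL", "REL", "LW", "RW"]
EDGES = [
    ("Nose", "Neck"), ("Nose", "LE"), ("Nose", "RE"),
    ("Neck", "LS"), ("Neck", "RS"),
    ("LS", "LEL"), ("RS", "REL"),
    ("LEL", "LW"), ("REL", "RW"),
    ("Nose", "LW"), ("Nose", "RW"),  # extra edges: the wrist approaches the nose when smoking
]

def build_adjacency() -> np.ndarray:
    idx = {name: i for i, name in enumerate(NODES)}
    A = np.eye(len(NODES))           # Table 1 keeps 1 on the diagonal
    for a, b in EDGES:
        A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1
    return A

def stack_frames(poses: list) -> np.ndarray:
    """Stack per-frame (10, 2) key point coordinates into a (C=2, T, V=10)
    tensor of the kind typically fed to an ST-GCN (assumed input layout)."""
    return np.stack(poses, axis=0).transpose(2, 0, 1)

assert (build_adjacency() == build_adjacency().T).all()  # symmetric, as in Table 1
```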
The method removes the traditional smoking detection methods' need for hardware such as smoke alarms and infrared cameras, and also addresses the low accuracy of conventional neural network methods on small targets, thereby improving the accuracy of smoking behavior detection. By fusing target detection, key point detection and graph convolution, it solves the prior-art problems that evidence of smoking behavior is difficult to collect and that detection is easily disturbed by similar behaviors.
An embodiment of the present invention further provides a smoking behavior detection system, as shown in FIG. 2, comprising a video frame extraction module, a target-detection-based smoking detection module, a behavior-recognition-based smoking detection module, a smoking behavior detection module, a screenshot module and an alarm module.
video frame extraction module: acquires video stream data from a camera or a video cloud platform and performs decoding and frame extraction to obtain single-frame or consecutive-frame images as input to the algorithms;
target-detection-based smoking detection module: preliminarily obtains the spatial position and probability of suspected smoking behavior with a target detection algorithm;
behavior-recognition-based smoking detection module: further detects the spatial position and probability of smoking behavior with human key point identification and a spatio-temporal graph convolutional neural network algorithm;
smoking behavior detection module: judges whether smoking behavior occurs, based on the recognition results of the two smoking detection modules and a set threshold, and obtains the position where the smoking behavior occurs;
screenshot and alarm module: when smoking behavior occurs, returns its position information and probability and captures screenshots of the consecutive frames showing the behavior.
The target-detection-based smoking behavior detection algorithm is trained and optimized as follows:
data labeling: only images of people holding cigarettes are selected as the training data set, and the person position and the cigarette position are labeled separately in each image; using this training data set deliberately introduces a model bias, making the model more sensitive to a person holding a cigarette and to the hand-held cigarette target;
model training: a classifier C is obtained by pre-training the model on the COCO 2017 data set, and a classifier C' is then obtained by transfer learning and local fine-tuning of the model on the self-collected and labeled smoking data set; C' serves as the target-detection-based smoking behavior detection algorithm;
model testing: the success rate of the bias introduction is evaluated with the ASR (Attack Success Rate), calculated as follows:
[formula given as an image in the original publication]
where C and C' are the classifiers before and after fine-tuning, x is the input image data, and y is the class label predicted by the classifier.
The behavior-recognition-based smoking behavior detection algorithm obtains, via human key point detection, 10 key points of the upper body for behavior recognition, including the nose, left and right ears, left and right shoulders, left and right elbows, and left and right wrists;
behavior recognition with the spatio-temporal graph convolution technique proceeds as follows:
the human key points are represented by a 10 x 10 adjacency matrix A, in which the value for key points i and j is set to 1 if they are connected to each other and 0 otherwise, and the adjacency value between the nose and the wrists is set to 1;
the adjacency matrix of each frame image is taken as input, and whether smoking behavior occurs is then predicted with the ST-GCN algorithm.
The adjacency matrix A shown in Table 1 above covers the following ten human key points: Nose, Neck, Left Eye (LE), Right Eye (RE), Left Shoulder (LS), Right Shoulder (RS), Left Elbow (LEL), Right Elbow (REL), Left Wrist (LW) and Right Wrist (RW).
Obtaining only the 10 upper-body key points through human key point detection greatly reduces the computation of the model while weakening the influence of lower-limb movement on smoking behavior recognition. At the same time, because the wrist approaches the nose when smoking, the adjacency between the nose and the wrists in the adjacency matrix is set to 1, which improves the recognition accuracy of the model.
The embodiment of the invention also provides a smoking behavior detection device, comprising at least one memory and at least one processor;
the at least one memory is used to store a machine-readable program;
the at least one processor is configured to invoke the machine-readable program to perform the smoking behavior detection method of the above embodiment.
Those skilled in the art can readily implement the present invention from the above detailed description. It should be understood, however, that the invention is not limited to the particular embodiments described; on the basis of the disclosed embodiments, a person skilled in the art can freely combine different technical features to obtain different technical solutions.
Technical features not described in this specification are known to those skilled in the art.

Claims (10)

1. A smoking behavior detection method, characterized in that the method comprises the following implementation steps:
1) Acquiring video stream data, decoding it and extracting frames to obtain single-frame images, identifying the human body and cigarettes in each image with a target-detection-based smoking behavior detection algorithm, and obtaining the position coordinates and probabilities of the human body and the cigarettes respectively;
2) Preliminarily calculating the probability of smoking behavior in the image from the result of step 1), and proceeding to the next step when the probability is greater than a set threshold, otherwise determining that no smoking behavior has occurred;
3) When step 2) indicates suspected smoking behavior, acquiring video stream data, extracting consecutive frames, performing behavior recognition with a behavior-recognition-based smoking behavior detection algorithm using spatio-temporal graph convolution, and, when smoking behavior occurs, capturing the corresponding consecutive image frames and raising an alarm.
2. The smoking behavior detection method according to claim 1, wherein the target-detection-based smoking behavior detection algorithm performs model training and optimization as follows:
data labeling: only images of people holding cigarettes are selected as the training data set, and the person position and the cigarette position are labeled separately in each image;
model training: a classifier C is obtained by pre-training the model on the COCO 2017 data set, and a classifier C' is then obtained by transfer learning and local fine-tuning of the model on the self-collected and labeled smoking data set; C' serves as the target-detection-based smoking behavior detection algorithm;
model testing: the success rate of the bias introduction is evaluated with the ASR, calculated as follows:
[formula given as an image in the original publication]
where C and C' are the classifiers before and after fine-tuning, x is the input image data, and y is the class label predicted by the classifier.
3. The smoking behavior detection method according to claim 1 or 2, wherein the behavior-recognition-based smoking behavior detection algorithm obtains, via human key point detection, 10 key points of the upper body for behavior recognition, including the nose, left and right ears, left and right shoulders, left and right elbows, and left and right wrists;
behavior recognition with the spatio-temporal graph convolution technique proceeds as follows:
the human key points are represented by a 10 x 10 adjacency matrix A, in which the value for key points i and j is set to 1 if they are connected to each other and 0 otherwise, and the adjacency value between the nose and the wrists is set to 1;
the adjacency matrix of each frame image is taken as input, and whether smoking behavior occurs is then predicted with the ST-GCN algorithm.
4. The smoking behavior detection method according to claim 3, wherein the 10 key points are labeled: nose Nose, neck Neck, left eye LE, right eye RE, left shoulder LS, right shoulder RS, left elbow LEL, right elbow REL, left wrist LW, right wrist RW; and the 10 x 10 adjacency matrix A is formed with these labels.
5. The smoking behavior detection method according to claim 3, wherein the 18 key points of each human body appearing in the consecutive frame images are identified with the OpenPose algorithm, from which the 10 key points of the upper body are obtained.
6. The smoking behavior detection method of claim 1, wherein the means for acquiring the video stream data comprises camera acquisition or video cloud platform acquisition.
7. A smoking behavior detection system, characterized by comprising a video frame extraction module, a target-detection-based smoking detection module, a behavior-recognition-based smoking detection module, a smoking behavior detection module, a screenshot module and an alarm module, wherein:
the video frame extraction module acquires video stream data and performs decoding and frame extraction to obtain single-frame or consecutive-frame images as input to the algorithms;
the target-detection-based smoking detection module preliminarily obtains the spatial position and probability of suspected smoking behavior with a target detection algorithm;
the behavior-recognition-based smoking detection module further detects the spatial position and probability of smoking behavior with human key point identification and a spatio-temporal graph convolutional neural network algorithm;
the smoking behavior detection module judges whether smoking behavior occurs, based on the recognition results of the two smoking detection modules and a set threshold, and obtains the position where the smoking behavior occurs;
the screenshot and alarm module, when smoking behavior occurs, returns its position information and probability and captures screenshots of the consecutive frames showing the behavior.
8. The smoking behavior detection system according to claim 7, wherein the target-detection-based smoking behavior detection algorithm is trained and optimized as follows:
data labeling: only images of people holding cigarettes are selected as the training data set, and the person position and the cigarette position are labeled separately in each image;
model training: a classifier C is obtained by pre-training the model on the COCO 2017 data set, and a classifier C' is then obtained by transfer learning and local fine-tuning of the model on the self-collected and labeled smoking data set; C' serves as the target-detection-based smoking behavior detection algorithm;
model testing: the success rate of the bias introduction is evaluated with the ASR, calculated as follows:
[formula given as an image in the original publication]
where C and C' are the classifiers before and after fine-tuning, x is the input image data, and y is the class label predicted by the classifier.
9. The smoking behavior detection system according to claim 7 or 8, wherein the behavior-recognition-based smoking behavior detection algorithm obtains, via human key point detection, 10 key points of the upper body for behavior recognition, including the nose, left and right ears, left and right shoulders, left and right elbows, and left and right wrists;
behavior recognition with the spatio-temporal graph convolution technique proceeds as follows:
the human key points are represented by a 10 x 10 adjacency matrix A, in which the value for key points i and j is set to 1 if they are connected to each other and 0 otherwise, and the adjacency value between the nose and the wrists is set to 1;
the adjacency matrix of each frame image is taken as input, and whether smoking behavior occurs is then predicted with the ST-GCN algorithm.
10. A smoking behaviour detection device, comprising: at least one memory and at least one processor;
the at least one memory is used to store a machine-readable program;
the at least one processor is configured to invoke the machine-readable program to perform the method of any one of claims 1 to 6.
CN202211125461.2A 2022-09-16 2022-09-16 Smoking behavior detection method, system and device Pending CN115205767A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211125461.2A CN115205767A (en) 2022-09-16 2022-09-16 Smoking behavior detection method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211125461.2A CN115205767A (en) 2022-09-16 2022-09-16 Smoking behavior detection method, system and device

Publications (1)

Publication Number Publication Date
CN115205767A true CN115205767A (en) 2022-10-18

Family

ID=83573553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211125461.2A Pending CN115205767A (en) 2022-09-16 2022-09-16 Smoking behavior detection method, system and device

Country Status (1)

Country Link
CN (1) CN115205767A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472573A (en) * 2019-08-14 2019-11-19 北京思图场景数据科技服务有限公司 A kind of human body behavior analysis method, equipment and computer storage medium based on body key point
CN110532925A (en) * 2019-08-22 2019-12-03 西安电子科技大学 Driver Fatigue Detection based on space-time diagram convolutional network
CN112163469A (en) * 2020-09-11 2021-01-01 燊赛(上海)智能科技有限公司 Smoking behavior recognition method, system, equipment and readable storage medium
CN112257643A (en) * 2020-10-30 2021-01-22 天津天地伟业智能安全防范科技有限公司 Smoking behavior and calling behavior identification method based on video streaming
US20210073525A1 (en) * 2019-09-11 2021-03-11 Naver Corporation Action Recognition Using Implicit Pose Representations
CN112818919A (en) * 2021-02-24 2021-05-18 北京华宇信息技术有限公司 Smoking behavior recognition method and device
CN114663807A (en) * 2022-03-14 2022-06-24 浙江工业大学 Smoking behavior detection method based on video analysis
CN114913598A (en) * 2022-05-06 2022-08-16 昆明理工大学 Smoking behavior identification method based on computer vision
CN114937302A (en) * 2022-05-30 2022-08-23 苏州浪潮智能科技有限公司 Smoking identification method, device and equipment and computer readable storage medium


Similar Documents

Publication Publication Date Title
CN110738101B (en) Behavior recognition method, behavior recognition device and computer-readable storage medium
CN105095856B (en) Face identification method is blocked based on mask
CN111191616A (en) Face shielding detection method, device, equipment and storage medium
CN111062273B (en) Method for tracing, detecting and alarming remaining articles
CN109063625A (en) A kind of face critical point detection method based on cascade deep network
CN109002801B (en) Face shielding detection method and system based on video monitoring
CN113034397B (en) High-altitude parabolic detection method capable of realizing automatic tracing of self-adaptive track in multiple environments in real time
WO2020258978A1 (en) Object detection method and device
CN110176025B (en) Invigilator tracking method based on posture
CN109800665A (en) A kind of Human bodys' response method, system and storage medium
CN103945089A (en) Dynamic target detection method based on brightness flicker correction and IP camera
CN111401310B (en) Kitchen sanitation safety supervision and management method based on artificial intelligence
CN103413149A (en) Method for detecting and identifying static target in complicated background
CN107729811B (en) Night flame detection method based on scene modeling
CN114913598A (en) Smoking behavior identification method based on computer vision
CN107704818A (en) A kind of fire detection system based on video image
CN115205767A (en) Smoking behavior detection method, system and device
CN115331152B (en) Fire fighting identification method and system
CN114743154B (en) Work clothes identification method based on registration form and computer readable medium
CN110910428A (en) Real-time multi-target tracking method based on neural network
CN113554682B (en) Target tracking-based safety helmet detection method
CN115100249A (en) Intelligent factory monitoring system based on target tracking algorithm
CN115116130A (en) Call action recognition method, device, equipment and storage medium
CN113947795A (en) Mask wearing detection method, device, equipment and storage medium
CN111191575B (en) Naked flame detection method and system based on flame jumping modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20221018)