CN114783000B - Method and device for detecting dressing standard of worker in bright kitchen range scene - Google Patents
- Publication number: CN114783000B
- Application number: CN202210673405.6A
- Authority: CN (China)
- Prior art keywords: pedestrian, information, posture, data, condition
- Prior art date: 2022-06-15
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214—Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2411—Pattern recognition; Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06N3/04—Neural networks; Architecture, e.g. interconnection topology
- G06N3/08—Neural networks; Learning methods
Abstract
The invention relates to the technical field of image recognition and provides a method and a device for detecting the dressing standard of workers in bright kitchen range scenes. The method comprises a training stage, which includes the following steps: acquiring an input image through a camera; identifying a target pedestrian frame in the input image and acquiring pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame, wherein the pedestrian clothing information comprises a hat-wearing condition and an apron-wearing condition; and labeling the obtained pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame to form a data set. The device comprises a labeling module with a first unit and a second unit; a storage module for storing the labeled data samples and dividing them into training data and test data; and an acquisition module for encoding the input image using the method.
Description
Technical Field
The invention relates to the technical field of image recognition, and in particular to a method and a device for detecting the dressing standard of workers in bright kitchen range scenes.
Background
A bright kitchen range ("open kitchen") refers to a practice whereby a catering service provider uses transparent glass, video and other means to show the relevant food-preparation processes to the public, placing key areas and links of the catering service under social supervision through cameras, electronic display screens and similar equipment.
In the kitchen, how well workers comply with dressing standards directly affects food safety. However, given the huge number of catering sites, 24-hour manual supervision is hard to sustain. An intelligent detection method is therefore needed to monitor non-compliant behavior and raise alarms without human attendance.
Some machine-learning-based dressing-standard detection methods already exist. Compared with those cases, the task here is specific to the bright kitchen range scene, which mainly manifests in the following factors:
1. Deployment on edge terminals that process multiple camera streams simultaneously, with high requirements on the robustness and accuracy of alarms.
2. Uncertain camera shooting angles and heights, large variation among indoor scenes, and complex data distribution.
3. Variable, dense postures and positions of workers in the images, severe occlusion, and difficult labeling.
4. Large differences in the color and style of work clothes, so simple features such as texture and color cannot by themselves indicate whether clothing meets the standard.
In general, the capability of a machine-learning algorithm depends largely on the quality and quantity of its training data. Considering the above factors, constructing a data set for large-scale bright kitchen range scenes is very difficult, which further limits the accuracy and robustness of generic methods.
In summary, how to improve the accuracy and robustness of alarm information in the bright kitchen range scene is a technical problem that urgently needs to be solved.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a method and a device for detecting the dressing standard of workers in a bright kitchen range scene, so as to improve the accuracy and robustness of alarm information in that scene.
According to a first aspect of the invention, a method for detecting the dressing standard of workers in a bright kitchen range scene is provided, comprising a training stage, wherein the training stage comprises the following steps:
S1, acquiring an input image through a camera;
S2, identifying a target pedestrian frame in the input image, and acquiring pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame, wherein the pedestrian position information comprises position point data, an occlusion condition and a density condition, the pedestrian posture information comprises a body posture condition, a head visibility condition and an orientation condition between the human body and the camera, and the pedestrian clothing information comprises a hat-wearing condition and an apron-wearing condition;
and S3, labeling the obtained pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame to form a data set, labeling a person for whom all of the information is compliant as dressing-compliant, labeling a person for whom any of the information is non-compliant as dressing-non-compliant and issuing warning information, and labeling a person for whom any of the information is uncertain as cannot-judge.
Further, the occlusion condition includes not occluded and occluded; the density condition includes not dense and dense; the body posture condition includes sitting, standing and other; the head visibility condition includes head visible, head invisible and other; the orientation condition between the human body and the camera includes facing the camera, back to the camera and other; the hat-wearing condition includes hat worn, hat not worn and other; and the apron-wearing condition includes apron worn, apron not worn and other.
Further, the training stage also comprises the following steps:
S4, training a pedestrian position model, a pedestrian posture model and a pedestrian clothing model on the data set obtained in step S2 by learning neural-network model parameters, fitting pedestrians through the models, and representing each pedestrian's position feature code, posture feature code and clothing feature code;
S5, further encoding the results represented in step S4 to obtain a specific feature dimension;
and S6, inputting the features obtained in step S5 into a classifier to obtain the predicted final alarm result.
Further, the training stage also comprises the following step:
S7, recording the model parameters on the test equipment, setting a threshold, and verifying the alarm effect given by the algorithm with the test data.
Further, the method also comprises a testing stage, the testing stage comprising the following steps:
S9, acquiring a scene input image through a camera;
S10, initializing the network structure, initializing the test-program running environment, and initializing the weight parameters saved during training;
S11, inputting the scene input image into the pedestrian position model to obtain a pedestrian position feature code, inputting the pedestrian position feature code and the scene input image into the pedestrian posture model and the pedestrian clothing model to obtain a pedestrian posture feature code and a pedestrian clothing feature code, and further encoding as in step S5 to obtain a specific feature dimension;
and S12, inputting the features obtained in step S11 into the classifier to obtain the alarm information.
Further, the pedestrian position information is detected with the target detection network YOLOX, the pedestrian posture information and the pedestrian clothing information are detected with the classification network Swin Transformer, and the classifier is a three-class SVM classifier.
Furthermore, the video stream of the camera is captured using crawler technology and frames are extracted from the video to obtain the input image; alternatively, the image shot by the camera is acquired directly to obtain the input image.
According to a second aspect of the present invention, there is provided a detection apparatus comprising:
a labeling module comprising a first unit and a second unit, the first unit being used for crawling data and acquiring image information of real scenes, and the second unit being used for annotating the acquired image information into data of the corresponding format;
a storage module for storing the labeled data samples and dividing them into training data and test data;
an acquisition module for encoding the input image using the above method and converting the pedestrian position information, pedestrian posture information and pedestrian clothing information in the image into a feature vector; and
a confirmation module for finally judging whether to issue the alarm information.
Further, the apparatus also comprises a construction module for constructing the algorithm model network based on the weights trained on the data.
Beneficial effects: according to the method for detecting the dressing standard of workers in the bright kitchen range scene provided by the invention, the complex dressing-standard problem is decomposed and each pedestrian is encoded into specific multi-dimensional features, which avoids the demands that single-model prediction places on training-sample coverage and algorithm-model complexity, and improves the accuracy and robustness of the output alarms.
Drawings
FIG. 1 is a block diagram of a SVM classifier workflow;
FIG. 2 is a block flow diagram of the method of the present invention;
FIG. 3 is a block diagram of a model inference flow;
FIG. 4 is a block diagram of the apparatus of the present invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
As shown in FIGS. 1-4, the invention provides a method for detecting the dressing standard of workers in bright kitchen range scenes, comprising a training stage and a testing stage.
The training phase comprises the following specific steps:
step one, capturing a video stream of the camera by using a crawler technology, and framing the video to obtain an input image; or directly acquiring the image shot by the camera so as to acquire the input image.
Step two, identifying a target pedestrian frame in the input image and acquiring pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame, wherein the pedestrian position information comprises coordinate data, width data, height data, an occlusion condition and a density condition, the pedestrian posture information comprises a body posture condition, a head visibility condition and an orientation condition between the human body and the camera, and the pedestrian clothing information comprises a hat-wearing condition and an apron-wearing condition.
The obtained pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame are labeled to form a data set: a person for whom all of the information is compliant is labeled dressing-compliant; a person for whom any of the information is non-compliant is labeled dressing-non-compliant and warning information is issued; a person for whom any of the information is uncertain is labeled cannot-judge.
The labeling format is generally as follows, where each row corresponds to one pedestrian labeling result in the input image and is equivalent to the vector:

[x, y, w, h, f0, f1, f2, f3, f4, f5, f6, f7]

Parameter description: the vector denotes one pedestrian labeling result, x denotes the column of the upper-left corner, y denotes the row of the upper-left corner, w denotes the labeled column width of the pedestrian, h denotes the labeled row height of the pedestrian, f0 denotes the occlusion condition, f1 the density condition, f2 the body posture condition, f3 the head visibility condition, f4 the orientation condition, f5 the hat-wearing condition, f6 the apron-wearing condition, and f7 the dressing label.
Specifically, occlusion (f0): whether the pedestrian is occluded by the environment, 0 (not occluded), 1 (occluded).
Specifically, density (f1): whether the region is dense, 0 (not dense), 1 (dense).
Specifically, posture (f2): whether the pedestrian is sitting or standing, 0 (sitting), 1 (standing), 2 (other).
Specifically, head (f3): whether head information is contained, 0 (head observable), 1 (head not observable), 2 (other).
Specifically, orientation (f4): whether the person faces the camera, 0 (facing the camera), 1 (back to the camera), 2 (other).
Specifically, hat (f5): whether a hat is worn, 0 (hat worn), 1 (not worn), 2 (other).
Specifically, apron (f6): whether an apron is worn, 0 (apron worn), 1 (not worn), 2 (other).
Specifically, dressing (f7): 0 (dressing compliant), 1 (dressing non-compliant), 2 (cannot judge).
Only samples whose dressing label f7 is 1 need to trigger an alarm.
Step three, labeling the data, storing the labeling results, and establishing a real-scene data set. The position information of the workers is labeled first; pedestrians whose position information indicates dense or occluded conditions are directly labeled cannot-judge and receive no further labeling. The posture information of the remaining pedestrians is then labeled; likewise, cases where the head cannot be observed, and other such cases, are labeled cannot-judge and receive no further labeling. Finally, according to the pedestrian clothing information, the compliant and non-compliant labeling results are calibrated, as sketched below.
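A minimal sketch of this labeling cascade, assuming the annotation layout [x, y, w, h, f0..f7] described above (the function and field names are illustrative, not part of the patent):

```python
def dressing_label(ann: dict) -> int:
    """Derive f7 from the per-field annotations using the cascade rule:
    occluded/dense -> cannot judge; head unseen or 'other' pose fields
    -> cannot judge; otherwise compliant only if hat and apron are both worn."""
    if ann["f0"] == 1 or ann["f1"] == 1:                     # occluded or dense
        return 2                                              # 2 = cannot judge
    if ann["f3"] != 0 or ann["f2"] == 2 or ann["f4"] == 2:    # head unseen / other
        return 2
    if ann["f5"] == 2 or ann["f6"] == 2:                      # clothing state uncertain
        return 2
    if ann["f5"] == 0 and ann["f6"] == 0:                     # hat worn and apron worn
        return 0                                              # 0 = compliant
    return 1                                                  # 1 = non-compliant -> alarm

# e.g. dressing_label({"f0": 0, "f1": 0, "f2": 1, "f3": 0,
#                      "f4": 0, "f5": 0, "f6": 1})  # -> 1 (apron missing)
```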
Step four, defining the algorithm model structure. Specifically, for pedestrian position detection, the industry-standard target detection network YOLOX is adopted; for pedestrian posture and clothing detection, the industry-standard classification network Swin Transformer is adopted; and for the final dressing-standard judgment, a Support Vector Machine (SVM) is adopted (alternatives such as random forests are also possible), for a total of 4 models.
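As an illustrative sketch of how the posture and clothing classifiers could be instantiated: the torchvision Swin-T backbone, the helper name make_swin_classifier and the head sizes mapped from the one-hot layouts below are assumptions, not the inventors' exact configuration.

```python
import torch
import torchvision.models as models

def make_swin_classifier(num_outputs: int) -> torch.nn.Module:
    """Swin Transformer backbone with a task-specific classification head."""
    m = models.swin_t(weights=models.Swin_T_Weights.IMAGENET1K_V1)
    m.head = torch.nn.Linear(m.head.in_features, num_outputs)
    return m

# 9 outputs: 3 posture + 3 head-visibility + 3 orientation states.
model_pose = make_swin_classifier(9)
# 6 outputs: 3 hat-wearing + 3 apron-wearing states.
model_dress = make_swin_classifier(6)
```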
Step five, training the following 3 models with the training-set data to learn the neural-network model parameters: the pedestrian position model (Model_pede), the pedestrian posture model (Model_pose) and the pedestrian clothing model (Model_dress). Given an input image, the models fit each pedestrian and represent the pedestrian's position feature code, posture feature code and clothing feature code.
The position feature code of a pedestrian in the input image:

Score_pede = Model_pede(image), with each detection of the form [P_cls, P_c0, P_c1, P_c2, xi, yi, wi, hi]

Parameter description: P_cls denotes the pedestrian probability, P_c0 a normal pedestrian, P_c1 an occluded pedestrian, P_c2 a dense pedestrian, and [xi, yi, wi, hi] the pedestrian position.
The pedestrian posture model and the pedestrian clothing model are then trained in the same way, using only the pedestrians labeled as usable.
Posture feature code of a pedestrian in the input image: Score_pose = Model_pose(box, image);
clothing feature code of a pedestrian in the input image: Score_dress = Model_dress(box, image).
Parameter description: Score_pose and Score_dress are vectors output by the models, representing respectively the posture feature code and the clothing feature code of a given pedestrian in the image. Specifically,
Score_pose[0-2] represents the body posture condition;
Score_pose[3-5] represents the head visibility condition;
Score_pose[6-8] represents the orientation condition;
Score_dress[0-2] represents the hat-wearing condition of the pedestrian's clothing;
Score_dress[3-5] represents the apron-wearing condition.
Step six, further encoding the above results to obtain a specific feature dimension.
The encoding rules are as follows. The encoding feature for the pedestrian position information begins with a 1-dimensional shape feature derived from the labeled width and height (reconstructed here as the aspect ratio):

feature_loc_bbx[0] = w / h

Parameter description: feature_loc_bbx denotes the position-information feature code, w denotes the labeled column width of the pedestrian, and h denotes the labeled row height of the pedestrian.
Histograms over the 3 channels (B, G, R) of the pedestrian detection region are computed separately: a 16-bin histogram is used, so each bin covers a gray-level interval of 256/16 = 16; counting the gray levels over the whole region then yields 16 features per channel, 48 in total. Normalization: the obtained 16-dimensional features are divided by the total number of pixels in the region.
According to the class probability, a one-hot encoding mechanism is adopted, i.e. exactly one dimension is 1 and the remaining positions are 0, forming a 3-dimensional vector:

feature_loc_bbx[49-51] = [0, 1, 0]
The feature encoding rule for the pedestrian posture information adopts the same one-hot encoding mechanism, 9 dimensions in total, as follows:

Feature_pose_bbx[52-60] = [0, 1, 0, ...]
The feature encoding rule for the pedestrian clothing information totals 6 dimensions:

feature_property_bbx[61-66] = [0, 1, 0, ...]

The encoded feature dimension is therefore 1 + 48 + 3 + 9 + 6 = 67.
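A minimal sketch of this 67-dimensional encoding, assuming BGR images as loaded by OpenCV and the index layout above (the function name and argument names are illustrative):

```python
import cv2
import numpy as np

def encode_pedestrian(image, box, cls_onehot, pose_onehot, dress_onehot):
    """Encode one pedestrian as 1 (aspect ratio) + 48 (histograms) + 3 + 9 + 6 = 67 dims."""
    x, y, w, h = box
    region = np.ascontiguousarray(image[y:y + h, x:x + w])
    feats = [np.array([w / h], dtype=np.float32)]  # 1-dim shape feature (assumed w/h)
    for c in range(3):  # B, G, R channels
        hist = cv2.calcHist([region], [c], None, [16], [0, 256]).flatten()
        feats.append(hist / (region.shape[0] * region.shape[1]))  # normalize by pixel count
    feats += [np.asarray(cls_onehot, np.float32),    # 3-dim position-class one-hot
              np.asarray(pose_onehot, np.float32),   # 9-dim posture one-hot
              np.asarray(dress_onehot, np.float32)]  # 6-dim clothing one-hot
    vec = np.concatenate(feats)
    assert vec.shape == (67,)
    return vec
```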
Step seven, cascading is carried out by utilizing the specific feature dimensions extracted in the step six, and the marked pedestrian attitude feature codes and the marked pedestrian clothing feature codes, training a 3-class SVM classifier according to the marked labels (0/1/2), and aiming at the 3 classes of labels, 3 x 2/2 classifiers need to be trained in a one-vs-one mode and respectively correspond to (SVM 1/0 vs 1), (SVM 2/0 vs 2) and (SVM 3/1 vs 2);
the feature dimension input by the classifier is the feature processed by the encoder, the total number of the feature dimension is 67, and the final predicted alarm result is output.
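As a sketch, scikit-learn's SVC internally trains k(k-1)/2 = 3 binary one-vs-one SVMs for 3 classes, matching SVM1 (0 vs 1), SVM2 (0 vs 2) and SVM3 (1 vs 2) above; the file names and kernel choice are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# X: (n_samples, 67) concatenated feature codes; y: labels 0/1/2
# (0 = compliant, 1 = non-compliant -> alarm, 2 = cannot judge).
X = np.load("features.npy")   # assumed output of the encoding step
y = np.load("labels.npy")

# decision_function_shape="ovo" exposes the raw one-vs-one decision values.
clf = SVC(kernel="rbf", decision_function_shape="ovo")
clf.fit(X, y)

pred = clf.predict(X[:1])     # final predicted alarm result for one sample
print("alarm" if pred[0] == 1 else "no alarm")
```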
Step eight, recording the model parameters on the test equipment, setting a threshold, and verifying the alarm effect given by the algorithm with the test data.
The specific steps of the test phase are as follows:
step one, acquiring a camera video stream, performing frame extraction on the video, and acquiring a scene input image to be detected.
And step two, initializing a network structure, initializing a test program running environment and initializing a weight parameter stored in training.
Step three, inputting the image into Model_pede to obtain the pedestrian position feature code, then inputting the pedestrian position feature code and the scene input image into Model_pose and Model_dress to obtain the pedestrian posture feature code and the pedestrian clothing feature code, and computing the encodings in the manner of training step six, adopting the one-hot encoding mechanism:

Score_pede = Model_pede(image); Score_pose = Model_pose(box, image); Score_dress = Model_dress(box, image)

Parameter description: Model_pede denotes the pedestrian position model, Model_pose the pedestrian posture model, and Model_dress the pedestrian clothing model.
Step four, concatenating the features and feeding them into the SVM classifier to obtain the alarm information; the final prediction flow is shown in FIG. 2.
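A condensed sketch of this test-phase cascade, assuming the hypothetical model wrappers Model_pede, Model_pose and Model_dress expose simple callable interfaces (their real signatures are not specified by the patent):

```python
import numpy as np

def infer_alarms(image, model_pede, model_pose, model_dress, encoder, svm):
    """Run the cascade: detect pedestrians, score pose/clothing per box,
    encode each pedestrian to 67 dims, and classify 0/1/2."""
    alarms = []
    for det in model_pede(image):                 # each det: class one-hot + box
        cls_onehot, box = det["cls_onehot"], det["box"]
        pose_onehot = model_pose(box, image)      # 9-dim one-hot posture code
        dress_onehot = model_dress(box, image)    # 6-dim one-hot clothing code
        vec = encoder(image, box, cls_onehot, pose_onehot, dress_onehot)
        label = int(svm.predict(vec.reshape(1, -1))[0])
        if label == 1:                            # only non-compliant triggers an alarm
            alarms.append(box)
    return alarms
```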
The invention is thus a dressing-standard detection method based on multi-feature cascading. Its advantages are: by decomposing the complex dressing-standard problem and encoding each pedestrian into specific multi-dimensional features, the demands that single-model prediction places on training samples and algorithm-model complexity are avoided, and the accuracy and robustness of the output alarms are improved.
1. The difficulty of sample labeling is reduced and the required coverage of the labeled data is lowered: clothing attributes need not be labeled for occluded workers, and complex annotations such as keypoints are avoided.
2. The model output has a certain interpretability and therefore higher reliability: it is no longer a black box, and the output feature dimensions have a certain physical meaning.
3. The alarms given by the model are more accurate, with less false-alarm information under conditions of dense personnel and environmental occlusion.
The present invention also provides a detection apparatus comprising:
and the labeling module comprises a first unit and a second unit, the first unit is used for crawling data and acquiring image information of a real scene, and the second unit is used for labeling the acquired image information as corresponding format data.
And the storage module is used for storing the labeled data samples and dividing the data samples into training data and testing data.
And the acquisition module is used for encoding the input image by using the method and converting the pedestrian position information, the pedestrian posture information and the pedestrian clothing information in the image into the characteristic vector.
And the confirming module is used for finally judging whether to give out the alarm information.
And the building module is used for building an algorithm model network based on the weight of the data training.
While there have been shown and described what are at present considered to be the basic principles and essential features of the invention and advantages thereof, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, but is capable of other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although this description refers to embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted for clarity only, and those skilled in the art should take the description as a whole, since the technical solutions of the embodiments may be combined as appropriate to form further embodiments understandable to those skilled in the art.
Claims (7)
1. A method for detecting the dressing standard of workers in a bright kitchen range scene, characterized by comprising a training stage, the training stage comprising the following steps:
S1, acquiring an input image through a camera;
S2, identifying a target pedestrian frame in the input image, and acquiring pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame, wherein the pedestrian position information comprises position point data, an occlusion condition and a density condition, the pedestrian posture information comprises a body posture condition, a head visibility condition and an orientation condition between the human body and the camera, and the pedestrian clothing information comprises a hat-wearing condition and an apron-wearing condition;
S3, labeling the obtained pedestrian position information, pedestrian posture information and pedestrian clothing information of the target pedestrian frame to form a data set, labeling a person for whom all of the information is compliant as dressing-compliant, labeling a person for whom any of the information is non-compliant as dressing-non-compliant and issuing warning information, and labeling a person for whom any of the information is uncertain as cannot-judge;
wherein the occlusion condition comprises not occluded and occluded, the density condition comprises not dense and dense, the body posture condition comprises sitting, standing and other, the head visibility condition comprises head visible, head invisible and other, the orientation condition between the human body and the camera comprises facing the camera, back to the camera and other, the hat-wearing condition comprises hat worn, hat not worn and other, and the apron-wearing condition comprises apron worn, apron not worn and other;
the training stage further comprises the following steps:
S4, training a pedestrian position model, a pedestrian posture model and a pedestrian clothing model on the data set obtained in step S2 by learning neural-network model parameters, fitting pedestrians through the models, and representing each pedestrian's position feature code, posture feature code and clothing feature code;
S5, further encoding the results represented in step S4 to obtain a specific feature dimension;
and S6, inputting the features obtained in step S5 into a classifier to obtain the predicted final alarm result.
2. The method for detecting the dressing standard of workers in a bright kitchen range scene according to claim 1, characterized in that the training stage further comprises the following step:
S7, recording the model parameters on the test equipment, setting a threshold, and verifying the alarm effect given by the algorithm with the test data.
3. The method for detecting the dressing standard of workers in a bright kitchen range scene according to claim 2, characterized by further comprising a testing stage, the testing stage comprising the following steps:
S9, acquiring a scene input image through a camera;
S10, initializing the network structure, initializing the test-program running environment, and initializing the weight parameters saved during training;
S11, inputting the scene input image into the pedestrian position model to obtain a pedestrian position feature code, inputting the pedestrian position feature code into the pedestrian posture model to obtain a pedestrian posture feature code, inputting the pedestrian posture feature code into the pedestrian clothing model to obtain a pedestrian clothing feature code, and further encoding as in step S5 to obtain a specific feature dimension;
and S12, inputting the features obtained in step S11 into the classifier to obtain the alarm information.
4. The method for detecting the dressing standard of workers in a bright kitchen range scene according to claim 1, characterized in that the pedestrian position information is detected with the target detection network YOLOX, the pedestrian posture information and the pedestrian clothing information are detected with the classification network Swin Transformer, and the classifier is a three-class SVM classifier.
5. The method for detecting the dressing standard of workers in a bright kitchen range scene according to claim 1, characterized in that the video stream of the camera is captured using crawler technology and frames are extracted from the video to obtain the input image; or the image shot by the camera is acquired directly to obtain the input image.
6. A detection apparatus applied to the method for detecting the dressing standard of workers in a bright kitchen range scene according to claim 3, characterized by comprising:
a labeling module comprising a first unit and a second unit, the first unit being used for crawling data and acquiring image information of real scenes, and the second unit being used for annotating the acquired image information into data of the corresponding format;
a storage module for storing the labeled data samples and dividing them into training data and test data;
an acquisition module for encoding the input image using the method and converting the pedestrian position information, pedestrian posture information and pedestrian clothing information in the image into a feature vector; and
a confirmation module for finally judging whether to issue the alarm information.
7. The detection apparatus according to claim 6, characterized by further comprising a construction module for constructing the algorithm model network based on the weights trained on the data.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210673405.6A | 2022-06-15 | 2022-06-15 | Method and device for detecting dressing standard of worker in bright kitchen range scene |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN114783000A | 2022-07-22 |
| CN114783000B | 2022-10-18 |
Family
ID=82421138
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210673405.6A | Method and device for detecting dressing standard of worker in bright kitchen range scene | 2022-06-15 | 2022-06-15 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN114783000B (en) |
Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110472574A * | 2019-08-15 | 2019-11-19 | 北京文安智能技术股份有限公司 | Method, apparatus and system for detecting non-standard dressing |
| CN111881705A * | 2019-09-29 | 2020-11-03 | 深圳数字生命研究院 | Data processing, training and recognition method, device and storage medium |
| CN112560656A * | 2020-12-11 | 2021-03-26 | 成都东方天呈智能科技有限公司 | Pedestrian multi-target tracking method combining attention mechanism and end-to-end training |
| CN112560759A * | 2020-12-24 | 2021-03-26 | 中再云图技术有限公司 | Bright kitchen range standard detection and identification method based on artificial intelligence, storage device and server |
| CN113553979A * | 2021-07-30 | 2021-10-26 | 国电汉川发电有限公司 | Safety clothing detection method and system based on improved YOLO v5 |
| CN114227720A * | 2022-01-10 | 2022-03-25 | 中山市火炬科学技术学校 | Vision-recognition cruise monitoring robot for kitchen epidemic prevention |
Family Cites Families (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11651609B2 * | 2020-06-10 | 2023-05-16 | Here Global B.V. | Method, apparatus, and system for mapping based on a detected pedestrian type |
| CN112966618B * | 2021-03-11 | 2024-02-09 | 京东科技信息技术有限公司 | Dressing recognition method, apparatus, device and computer readable medium |
Non-Patent Citations (4)

| Title |
|---|
| Zhang Na et al., "Personnel dress code detection algorithm based on convolutional neural network cascade," 2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), 2021-02-26, pp. 215-221. * |
| Yuan Yidan, "Research on image-recognition-based detection of staff dress-code compliance" (基于图像识别的工作人员穿戴规范性检测技术研究), China Master's Theses Full-text Database, Engineering Science and Technology I, No. 01, 2020-01-15, B026-10. * |
| Liu Xinyi et al., "Deep-learning-based dress-code compliance detection for workers at contaminated sites" (基于深度学习的污染场地作业人员着装规范性检测), Journal of Safety Science and Technology (中国安全生产科学技术), No. 07, 2020-07, pp. 169-175. * |
| Tian Feng et al., "Improved YOLOv5 for small-target detection of safety clothing at oilfield work sites" (改进YOLOv5的油田作业现场安全着装小目标检测), Computer Systems & Applications (计算机系统应用), Vol. 31, No. 3, 2022-03-14, pp. 159-168. * |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |