CN113392715A - Chef cap wearing detection method

Info

Publication number
CN113392715A
Authority
CN
China
Prior art keywords
chef
cap
chef cap
video data
detection method
Prior art date
Legal status
Pending
Application number
CN202110555898.9A
Other languages
Chinese (zh)
Inventor
陈志
Current Assignee
Shanghai Keshen Information Technology Co ltd
Original Assignee
Shanghai Keshen Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Keshen Information Technology Co ltd
Priority to CN202110555898.9A
Publication of CN113392715A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing and in particular discloses a chef cap wearing detection method, which comprises: acquiring video data captured by a camera device; extracting human body feature information from the video data; extracting corresponding head region information according to the human body feature information; inputting the head region information into a chef cap detection model; and receiving, on a terminal device, the detection result output by the chef cap detection model. Whether kitchen staff are wearing their chef caps can thus be detected in real time, making inspection more timely.

Description

Chef cap wearing detection method
Technical Field
The invention relates to the technical field of image processing, and in particular to a chef cap wearing detection method.
Background
In recent years, kitchen hygiene in the catering industry has received increasing attention from both government and society. Better restaurants now operate largely transparent kitchens so that customers can directly observe how the kitchen works, but space constraints sometimes make such a renovation impossible, which raises the question of how to assure customers of kitchen hygiene.
Whether a chef cap is worn is one of the important indicators of kitchen hygiene. At present, whether kitchen staff are wearing their chef caps is usually checked manually, but manual inspection is not timely.
Disclosure of Invention
The invention aims to provide a chef cap wearing detection method, so as to solve the technical problem in the prior art that manual checks of whether kitchen staff are wearing chef caps are not timely.
In order to achieve this purpose, the chef cap wearing detection method adopted by the invention comprises the following steps (an illustrative sketch of the overall pipeline follows the list):
acquiring video data captured by a camera device;
extracting human body feature information from the video data;
extracting corresponding head region information according to the human body feature information;
inputting the head region information into a chef cap detection model;
receiving, on a terminal device, a detection result output by the chef cap detection model;
if a kitchen worker is detected not wearing a chef cap, sending the corresponding image data to a manager;
and if all kitchen workers are detected wearing chef caps, not sending the image data to the manager.
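To make the flow concrete, the following is a minimal Python sketch of this pipeline. It is illustrative only: the use of OpenCV for frame capture and the helper names detect_persons, crop_head_region, chef_cap_model and notify_manager are assumptions introduced here, not part of the claimed method.

# Illustrative pipeline sketch; OpenCV usage and all helper names are
# assumptions, not part of the claimed method.
import cv2

def run_detection(stream_url, detect_persons, crop_head_region,
                  chef_cap_model, notify_manager):
    """Loop over camera frames and report any worker without a chef cap."""
    cap = cv2.VideoCapture(stream_url)              # acquire video data
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for person in detect_persons(frame):        # human body feature information
            head = crop_head_region(frame, person)  # head region information
            if not chef_cap_model(head):            # chef cap detection model
                notify_manager(frame, person)       # send image data to the manager
    cap.release()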
Wherein, acquiring the video data captured by the camera device comprises:
mounting the camera device at an appropriate corner of the kitchen;
debugging the camera device for shooting;
and after debugging is finished, starting the camera device and acquiring the captured video data.
The video data are video footage or photographs.
Wherein, extracting the human body feature information from the video data comprises:
decoding the video data according to an artificial intelligence algorithm to obtain decoded data;
and performing feature analysis on the decoded data at the corresponding frame rate to obtain the human body feature information.
The human body feature information comprises personnel identity identification information, personnel post identification information and personnel gender identification information.
Wherein, the head region information is a head region image of the corresponding kitchen worker.
Wherein, before the step of inputting the head region information into the chef cap detection model, the method further comprises:
training on an original chef cap scene data set to obtain the chef cap detection model.
Wherein, when the detection result output by the chef cap detection model is received on the terminal device, if it is detected that a kitchen worker is not wearing a chef cap, the result is automatically verified, specifically:
receiving a real-time distance signal between the chef cap and a first obstacle on the ground, the real-time distance signal being transmitted by the chef cap within a first preset time;
calculating the absolute value of the difference between the real-time distance signal and the worker's height;
and judging whether the absolute value changes within a second preset time and whether it lies within a preset threshold range: if the absolute value changes within the second preset time and lies within the preset threshold range, the detection result is verified as wrong; if the absolute value does not change over that period or does not lie within the preset threshold range, the detection result is verified as correct and is sent to the terminal device.
According to the chef cap wearing detection method, video data captured by the camera device are acquired; human body feature information is extracted from the video data; corresponding head region information is extracted according to the human body feature information; the head region information is input into the chef cap detection model; and the detection result output by the chef cap detection model is received on the terminal device. If a kitchen worker is detected not wearing a chef cap, the corresponding image data are sent to a manager; if all kitchen workers are detected wearing chef caps, the image data are not sent to the manager. Manual inspection is therefore unnecessary: whether kitchen staff are wearing their chef caps can be detected in real time, making inspection more timely.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of the steps of the chef cap wearing detection method of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary, are intended to illustrate the invention, and are not to be construed as limiting the invention.
In the description of the present invention, it is to be understood that the terms "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on the orientations or positional relationships illustrated in the drawings, and are used merely for convenience and simplicity of description; they do not indicate or imply that the devices or elements referred to must have a particular orientation or must be constructed and operated in a particular orientation, and thus are not to be construed as limiting the present invention. Further, in the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Referring to FIG. 1, the present invention provides a chef cap wearing detection method, which comprises the following steps:
S1: acquiring video data captured by a camera device;
S2: extracting human body feature information from the video data;
S3: extracting corresponding head region information according to the human body feature information;
S4: inputting the head region information into a chef cap detection model;
S5: receiving, on a terminal device, a detection result output by the chef cap detection model; if a kitchen worker is detected not wearing a chef cap, sending the corresponding image data to a manager; and if all kitchen workers are detected wearing chef caps, not sending the image data to the manager.
In the present embodiment, the camera device is mounted at an appropriate corner of the kitchen and debugged for shooting; after debugging is finished, the camera device is started and the captured video data are acquired. The video data are video footage or photographs. In addition, after debugging is finished and the camera device is started, a head detector trained based on libSVM locates human heads directly and detects all heads and their positions in the image.
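As an illustration of an SVM-based head detector in the spirit of the libSVM approach mentioned above, the sketch below trains a linear SVM on HOG features and slides a fixed window over each frame. scikit-learn and scikit-image are stand-ins chosen for illustration, and the 64-pixel window and 16-pixel stride are arbitrary assumed values; the patent does not specify these details.

# SVM-based head detector sketch; scikit-learn/scikit-image, the window size
# and the stride are assumptions made for illustration.
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = 64      # assumed square window size in pixels
STRIDE = 16   # assumed sliding-window stride

def train_head_classifier(head_patches, background_patches):
    """Train a head / non-head classifier on WIN x WIN grayscale patches."""
    X = [hog(p) for p in head_patches + background_patches]
    y = [1] * len(head_patches) + [0] * len(background_patches)
    clf = LinearSVC()
    clf.fit(X, y)
    return clf

def detect_heads(gray_frame, clf):
    """Slide a window over the frame and return boxes classified as heads."""
    boxes = []
    h, w = gray_frame.shape
    for y in range(0, h - WIN, STRIDE):
        for x in range(0, w - WIN, STRIDE):
            patch = gray_frame[y:y + WIN, x:x + WIN]
            if clf.predict([hog(patch)])[0] == 1:
                boxes.append((x, y, WIN, WIN))
    return boxes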
Extracting the human body feature information from the video data comprises the following steps: decoding the video data according to an artificial intelligence algorithm to obtain decoded data; and performing feature analysis on the decoded data at the corresponding frame rate to obtain the human body feature information, which comprises personnel identity identification information, personnel post identification information and personnel gender identification information.
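As an illustration of decoding the video and analyzing it at a chosen frame rate, the sketch below samples roughly one frame per second with OpenCV and hands each sampled frame to a feature-analysis callback. The sampling rate and the analyze_features callback are assumptions; the patent does not name a decoder or a rate.

# Decoding / frame-rate sampling sketch; OpenCV and the one-frame-per-second
# default are assumptions for illustration only.
import cv2

def analyze_at_frame_rate(video_path, analyze_features, samples_per_second=1):
    """Decode the video and run feature analysis on frames at a fixed rate."""
    cap = cv2.VideoCapture(video_path)          # decode the video data
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0     # fall back if FPS is unknown
    step = max(1, int(fps // samples_per_second))
    results, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:                        # analyze at the chosen rate
            results.append(analyze_features(frame))  # identity / post / gender info
        index += 1
    cap.release()
    return results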
Corresponding head region information is then extracted according to the human body feature information; the head region information is a head region image of the corresponding kitchen worker. The head region information is then input into the chef cap detection model.
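A minimal sketch of this step: each detected head box is cropped from the frame, resized, and passed to the detection model. The 128x128 input size and the callable model interface are assumptions made for illustration.

# Head-crop and classification sketch; the 128x128 input size and the model
# interface are assumptions.
import cv2

def classify_heads(frame, head_boxes, model, input_size=(128, 128)):
    """Return (box, wearing_cap) pairs for every detected head box."""
    results = []
    for (x, y, w, h) in head_boxes:
        head_img = frame[y:y + h, x:x + w]               # head region image
        head_img = cv2.resize(head_img, input_size)
        results.append(((x, y, w, h), model(head_img)))  # True if a chef cap is worn
    return results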
before the step of inputting the human head area information into the chef cap detection model, the method further comprises the following steps: training is carried out according to the original scene data set of the chef cap to obtain the chef cap detection model.
The step of training on an original chef cap wearing scene data set to obtain the chef cap detection model comprises:
acquiring an original chef cap wearing scene data set and performing enhancement processing on it;
training neural networks with different feature extraction backbones on the original and the enhanced chef cap wearing scene data sets to obtain a plurality of first models;
acquiring an original chef cap not-worn scene data set and performing enhancement processing on it;
training on the enhanced chef cap not-worn scene data set, using a first model as the pre-training model, to obtain a second model;
performing non-maximum suppression processing without distinguishing between the plurality of first models and the second models (see the sketch after this list);
and fusing the plurality of first models and the second models after the non-maximum suppression processing to obtain the chef cap detection model, through which chef cap wearing detection is carried out.
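As an illustration of the suppression step that does not distinguish which model produced a box, the sketch below pools the detections of all first and second models into a single set and applies one IoU-based non-maximum suppression pass. The (x1, y1, x2, y2, score) detection format and the 0.5 IoU threshold are assumptions.

# Cross-model non-maximum suppression sketch; the detection tuple format and
# the IoU threshold are assumptions.
def iou(a, b):
    ax1, ay1, ax2, ay2, _ = a
    bx1, by1, bx2, by2, _ = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def fuse_with_nms(per_model_detections, iou_threshold=0.5):
    """Pool boxes from all models, keep the highest-scoring non-overlapping ones."""
    pooled = sorted((d for dets in per_model_detections for d in dets),
                    key=lambda d: d[4], reverse=True)
    kept = []
    for det in pooled:
        if all(iou(det, k) < iou_threshold for k in kept):
            kept.append(det)
    return kept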
After the plurality of first models and the second models subjected to the non-maximum suppression processing are fused, the fused models are input into a YOLOv3 network for further training, and the chef cap detection model is obtained. The detection result output by the chef cap detection model is received on the terminal device; if a kitchen worker is detected not wearing a chef cap, the corresponding image data are sent to a manager; if all kitchen workers are detected wearing chef caps, the image data are not sent to the manager. The terminal device is a computer.
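A sketch of the reporting rule described above: the image data is forwarded to the manager only when at least one worker is detected without a chef cap. The detection dictionary format and the send_to_manager function are assumptions used for illustration.

# Reporting-rule sketch; the detection format and send_to_manager are assumed.
def report_detections(frame, detections, send_to_manager):
    """Forward the frame only if someone is not wearing a chef cap."""
    offenders = [d for d in detections if d["label"] == "no_chef_cap"]
    if offenders:
        send_to_manager(frame, offenders)  # image data goes to the manager
    # if every worker wears a chef cap, nothing is sent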
in receiving the detection result output by the chef cap detection model based on the terminal equipment, if the situation that a cook person does not wear the chef cap is detected, the result verification is automatically carried out, and the method specifically comprises the following steps: receiving a real-time distance signal between a chef cap and a first obstacle on the ground, wherein the real-time distance signal is transmitted by the chef cap within a first preset time; calculating the absolute value of the difference between the real-time distance signal and the height of the worker; and judging whether the absolute value in the second preset time is changed or not, judging that the absolute value is within the preset threshold range, if the absolute value in the second preset time is changed and is within the preset threshold range, verifying that the detection result is wrong, and if the absolute value in a period of time is not changed or is not within the preset threshold range, verifying that the detection result is correct, and sending the detection result to the terminal equipment.
The chef cap wearing detection method provided by the invention acquires video data captured by a camera device; extracts human body feature information from the video data; extracts corresponding head region information according to the human body feature information; inputs the head region information into a chef cap detection model; and receives, on a terminal device, the detection result output by the chef cap detection model. If a kitchen worker is detected not wearing a chef cap, the corresponding image data are sent to a manager; if all kitchen workers are detected wearing chef caps, the image data are not sent. Manual inspection is therefore unnecessary: whether kitchen staff are wearing their chef caps can be detected in real time, making inspection more timely.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A chef cap wearing detection method, characterized by comprising the following steps:
acquiring video data captured by a camera device;
extracting human body feature information from the video data;
extracting corresponding head region information according to the human body feature information;
inputting the head region information into a chef cap detection model;
receiving, on a terminal device, a detection result output by the chef cap detection model;
if a kitchen worker is detected not wearing a chef cap, sending the corresponding image data to a manager;
and if all kitchen workers are detected wearing chef caps, not sending the image data to the manager.
2. The chef cap wearing detection method of claim 1, wherein acquiring the video data captured by the camera device comprises:
mounting the camera device at an appropriate corner of a kitchen;
debugging the camera device for shooting;
and after debugging is finished, starting the camera device and acquiring the captured video data.
3. The chef cap wearing detection method of claim 2, wherein the video data are video footage or photographs.
4. The chef cap wearing detection method of claim 1, wherein extracting the human body feature information from the video data comprises:
decoding the video data according to an artificial intelligence algorithm to obtain decoded data;
and performing feature analysis on the decoded data at the corresponding frame rate to obtain the human body feature information.
5. The chef cap wearing detection method of claim 1, wherein the human body feature information comprises personnel identity identification information, personnel post identification information and personnel gender identification information.
6. The chef cap wearing detection method of claim 1, wherein the head region information is a head region image of the corresponding kitchen worker.
7. The chef cap wearing detection method of claim 1, further comprising, before the step of inputting the head region information into the chef cap detection model:
training on an original chef cap scene data set to obtain the chef cap detection model.
8. The chef cap wearing detection method of claim 1, wherein, when the detection result output by the chef cap detection model is received on the terminal device, if it is detected that a kitchen worker is not wearing a chef cap, the result is automatically verified, specifically:
receiving a real-time distance signal between the chef cap and a first obstacle on the ground, the real-time distance signal being transmitted by the chef cap within a first preset time;
calculating the absolute value of the difference between the real-time distance signal and the worker's height;
and judging whether the absolute value changes within a second preset time and whether it lies within a preset threshold range: if the absolute value changes within the second preset time and lies within the preset threshold range, verifying that the detection result is wrong; if the absolute value does not change over that period or does not lie within the preset threshold range, verifying that the detection result is correct and sending the detection result to the terminal device.

Priority Applications (1)

Application Number: CN202110555898.9A (publication CN113392715A)
Priority Date: 2021-05-21
Filing Date: 2021-05-21
Title: Chef cap wearing detection method

Publications (1)

Publication Number: CN113392715A
Publication Date: 2021-09-14

Family

ID=77618816

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114227720A (en) * 2022-01-10 2022-03-25 中山市火炬科学技术学校 Vision identification cruise monitoring robot for kitchen epidemic prevention
CN114821476A (en) * 2022-05-05 2022-07-29 北京容联易通信息技术有限公司 Bright kitchen range intelligent monitoring method and system based on deep learning detection
CN114821476B (en) * 2022-05-05 2022-11-22 北京容联易通信息技术有限公司 Intelligent open kitchen bright stove monitoring method and system based on deep learning detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination