CN114821476A - Bright kitchen range intelligent monitoring method and system based on deep learning detection - Google Patents


Info

Publication number
CN114821476A
CN114821476A
Authority
CN
China
Prior art keywords
target
data
video data
determining
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210480478.3A
Other languages
Chinese (zh)
Other versions
CN114821476B (en)
Inventor
刘佳宁
曾国卿
许志强
孙昌勋
朱新潮
李威
杨坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ronglian Yitong Information Technology Co ltd
Original Assignee
Beijing Ronglian Yitong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ronglian Yitong Information Technology Co ltd filed Critical Beijing Ronglian Yitong Information Technology Co ltd
Priority to CN202210480478.3A
Publication of CN114821476A
Application granted
Publication of CN114821476B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a bright kitchen range intelligent monitoring method and system based on deep learning detection, wherein the method comprises the following steps: acquiring video data based on a target field video stream, and performing data analysis on the video data to determine a neural network structure; designing a target neural network based on the neural network structure in a targeted manner, carrying out model training on the video data based on the target neural network to generate a target model, and inputting the video data into the target model for monitoring; and displaying the monitoring result, and sending out a real-time early warning together with the reason for the early warning. Monitoring of the video data is realized through the target model, with display and real-time early warning based on the monitoring result. This not only achieves accurate and intelligent monitoring of the wearing of chef uniforms and chef hats, of behaviors such as personnel playing with mobile phones or smoking, and of unsanitary conditions such as garbage cans placed at random, garbage cans left uncovered, overflowing garbage and garbage scattered about, but also, through the real-time alarm and the determined early-warning reason, helps the customer grasp the condition of the kitchen in time.

Description

Bright kitchen range intelligent monitoring method and system based on deep learning detection
Technical Field
The invention relates to the technical field of intelligent kitchen monitoring, in particular to a bright kitchen range intelligent monitoring method and system based on deep learning detection.
Background
At present, the bright kitchen range ("open kitchen, bright stove") improves the guarantee of food safety through panoramic display of key places and key links. The invention provides intelligent monitoring and analysis of the wearing, behavior and sanitary conditions of kitchen personnel, and realizes automatic real-time early warning.
However, current monitoring technology only uses cameras for recording and provides no interaction between the client and the monitoring, which greatly reduces monitoring efficiency.
Disclosure of Invention
The invention provides an intelligent monitoring method and system for a bright kitchen range based on deep learning detection, which monitor video data through a target model and perform display and real-time early warning based on the monitoring result. They not only realize accurate and intelligent monitoring of the wearing of chef uniforms and chef hats, of behaviors such as personnel playing with mobile phones or smoking, and of unsanitary conditions such as garbage cans placed at random, left uncovered or overflowing and garbage scattered about, but also, through real-time alarming and determination of the early-warning reason, help clients grasp the condition of the kitchen in time.
An intelligent monitoring method for a bright kitchen range based on deep learning target detection comprises the following steps:
step 1: acquiring video data based on a target field video stream, and simultaneously, performing data analysis on the video data to determine a neural network structure;
step 2: designing a target neural network based on the neural network structure in a targeted manner, performing model training on the video data based on the target neural network to generate a target model, and inputting the video data into the target model for monitoring;
and step 3: and displaying the monitoring result, and sending out real-time early warning and an early warning reason.
Preferably, in step 1, the working process of obtaining video data based on a target field video stream while performing data analysis on the video data to determine a neural network structure includes:
reading the target site video stream, and collecting video data corresponding to the target site video stream based on a preset server;
extracting a data model related to the video data on the network according to a target program;
and cleaning the video data, analyzing the cleaned video data through the data model, and determining a neural network structure based on an analysis result.
Preferably, in step 2, the working process of performing model training on the video data based on the target neural network to generate a target model includes:
acquiring scene requirements of a target client;
determining a target neural network based on the video data and the scene requirements of the target client, and meanwhile, carrying out data annotation on the video data;
performing targeted modification and tuning on the target neural network based on the labeled video data, and performing model training on the target scene based on the tuned target neural network;
and generating the target model according to the training result.
Preferably, in step 2, the target model includes: an object detection model, a garbage can attribute classification model and an action prediction model; video data corresponding to the target field video stream are input into these three models to realize intelligent monitoring of the bright kitchen range.
Preferably, in the intelligent monitoring method for the bright kitchen range based on deep learning target detection, the process of intelligently monitoring the bright kitchen range comprises the following steps:
reading the target site video stream, determining a video image corresponding to the current frame, and reading the area range of the human body in the video image corresponding to the current frame;
cropping the frame image according to the area range, and determining a target image based on the cropping result;
enhancing the human body characteristics in the target image, and simultaneously removing background interference of the target image to obtain a processed image;
reading the processed image, determining characteristic skeleton points of the human body, and determining a target person based on the characteristic skeleton points of the human body;
dynamically tracking the target personnel based on a monitoring device, and determining the activity frame information of the target personnel;
when the activity frame information of the target person collected by the monitoring device meets a preset frame number, inputting the characteristic skeleton points of the target person and the activity frame information into the action prediction model for action prediction;
determining a current action of the target person based on the prediction result.
Preferably, in the intelligent monitoring method for the bright kitchen range based on deep learning target detection, the process of intelligently monitoring the bright kitchen range further comprises the following steps:
inputting the video data into the object detection model for static detection and identification, and determining an identification result;
determining a static chef cap and a static chef uniform of a target site based on the recognition result, performing interactive calculation on a target recognition area according to the recognition result, and determining a target chef corresponding to the static chef cap and the static chef uniform based on the calculation result;
recording the activity area of the target chef in the target live video stream, and determining a recorded image set in time-frame order;
determining a misrecognized image in the recorded image set by using target characteristics according to a preset algorithm, and filtering the misrecognized image to determine a filtered image set;
on the basis of the filtering image set, sequentially carrying out similarity matching on the activity area of the target chef corresponding to the filtering image of the current frame and the activity area of the target chef corresponding to the filtering image of the previous frame;
and determining the wearing condition of the target cook on the cook clothes and the cook cap according to the similarity matching result.
Preferably, in the intelligent monitoring method for the bright kitchen range based on deep learning target detection, the process of intelligently monitoring the bright kitchen range further comprises the following steps:
obtaining a video picture of the target site, dividing the video picture based on the client requirements of a target client, and determining a garbage can area and a sensitive area;
respectively carrying out region detection on the garbage can region and the sensitive region, wherein the specific process comprises the following steps:
identifying trash cans in the trash can area based on the trash can attribute classification model, and determining the state attributes of the trash cans;
wherein the state attributes of the trash can comprise: full, uncovered, and covered;
recognizing the garbage based on the object detection model, interacting with the sensitive area based on a recognition result, and generating an interaction result, wherein the interaction result comprises: refuse is in the sensitive area and refuse is not in the sensitive area.
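The interaction between detected garbage and the sensitive area can be sketched as a simple geometric test. The box format and the center-point rule below are illustrative assumptions; the patent does not specify how the interaction is computed.

```python
def interact_with_sensitive_area(detections, sensitive_area):
    """Classify each detected garbage box relative to a sensitive area.

    Hypothetical sketch: boxes are (x1, y1, x2, y2) rectangles, and a
    detection counts as 'in the sensitive area' when its center point
    falls inside the area rectangle.
    """
    ax1, ay1, ax2, ay2 = sensitive_area
    results = []
    for (x1, y1, x2, y2) in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # box center
        inside = ax1 <= cx <= ax2 and ay1 <= cy <= ay2
        results.append("garbage in sensitive area" if inside
                       else "garbage not in sensitive area")
    return results
```

A full system would likely use the overlap of the box with the area rather than only its center, but the center test keeps the interaction logic visible.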
Preferably, in step 2, after a target model is generated and before the video data is input into the target model for monitoring, the method for intelligently monitoring the bright kitchen range based on deep learning target detection further includes:
reading a video detection point, determining the detection characteristics of the video detection point, and determining a classification label of the video data according to the detection characteristics;
clustering the video data based on the classification label, and determining a sub-video data block, wherein the sub-video data block comprises a plurality of video data with the same detection characteristic;
determining sample detection data based on the video detection points, and inputting the sample detection data serving as a check code into the sub-video data block for data check;
determining video data which does not accord with the check code in the sub video data block according to a check result, and taking the video data which does not accord with the check code as irrelevant data;
packing the data of the irrelevant data to generate an irrelevant data set;
determining the data attribute of the irrelevant data in the irrelevant data set, matching the data attribute with the detection characteristics of the video detection point, and judging whether the irrelevant data in the irrelevant data set has effective data or not;
when the data attribute is matched with the detection feature of the video detection point, judging that effective data exists in the unrelated data set, and marking the effective data in the unrelated data set;
extracting the effective data based on the marking result, storing the effective data in a sub-video data block consistent with the detection characteristics of the video detection points, cleaning the residual ineffective data in the irrelevant data set, and generating an accurate sub-video data block based on the cleaning result;
otherwise, judging that no effective data exists in the irrelevant data set, and removing the irrelevant data set.
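The clustering, check-code validation, and salvage of valid data described above can be sketched as follows. The dict-based item representation and the label-to-feature mapping are hypothetical; they only illustrate the filtering logic, not the patent's concrete data formats.

```python
def refine_blocks(video_items, detection_features):
    """Cluster labelled video items into sub-blocks and validate them.

    Hypothetical sketch: each item is {'label': ..., 'feature': ...};
    detection_features maps each classification label to its expected
    detection feature (the 'check code'). Items whose feature mismatches
    their block become irrelevant data; irrelevant items whose feature
    still matches some detection feature are marked valid and stored in
    the consistent block, the rest are cleaned out.
    """
    blocks, irrelevant = {}, []
    for item in video_items:
        if item["feature"] == detection_features.get(item["label"]):
            blocks.setdefault(item["label"], []).append(item)
        else:
            irrelevant.append(item)
    # salvage step: mark valid data inside the irrelevant data set
    salvaged = [it for it in irrelevant
                if it["feature"] in detection_features.values()]
    for it in salvaged:
        label = next(k for k, v in detection_features.items()
                     if v == it["feature"])
        blocks.setdefault(label, []).append(it)
    discarded = [it for it in irrelevant if it not in salvaged]
    return blocks, discarded
```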
Preferably, in step 3 of the intelligent monitoring method for the bright kitchen range based on deep learning target detection, displaying the monitoring result and sending out the real-time early warning and early-warning reason comprises the following steps:
acquiring a monitoring result and determining data attribute information of the monitoring result;
matching the data attribute information with compression modes in a preset compression mode library, determining a target compression mode corresponding to the monitoring result, and calling a corresponding target compression algorithm based on the target compression mode to compress the monitoring result to obtain a target compression packet, wherein one compression mode corresponds to one compression algorithm;
constructing a transmission link, transmitting the target compression packet to a preset display terminal based on the transmission link, and decompressing the target compression packet by the display terminal;
classifying and displaying the monitoring results based on the decompression processing result, and determining environmental information and time information contained in each type of monitoring results based on the display result;
determining whether illegal or inelegant behaviors exist in each type of monitoring result based on the environment information and the time information;
when the illegal or inelegant behavior exists, determining the type of the illegal or inelegant behavior, determining a target alarm model based on the type, and alarming based on the target alarm model;
otherwise, judging that the interior of the kitchen has no illegal or inelegant behaviors, and continuously monitoring the kitchen in real time.
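The compression-mode matching step (one compression mode per data attribute, one algorithm per mode) might look like the following sketch; the attribute names and the choice of zlib/bz2 are assumptions for illustration, not algorithms named by the patent.

```python
import bz2
import zlib

# Hypothetical preset compression mode library: each data attribute maps
# to exactly one compression mode, and each mode to one algorithm.
COMPRESSION_LIBRARY = {
    "image": ("zlib", zlib.compress),
    "log":   ("bz2",  bz2.compress),
}
DECOMPRESSORS = {"zlib": zlib.decompress, "bz2": bz2.decompress}

def transmit(monitoring_result: bytes, data_attribute: str) -> bytes:
    """Compress with the mode matched to the data attribute, 'transmit'
    the packet, and decompress it on the display-terminal side."""
    mode, compress = COMPRESSION_LIBRARY[data_attribute]
    packet = compress(monitoring_result)      # target compression packet
    return DECOMPRESSORS[mode](packet)        # display terminal decompresses
```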
A bright kitchen range intelligent monitoring system based on deep learning target detection comprises:
the data acquisition module is used for acquiring video data based on a target field video stream and simultaneously performing data analysis on the video data to determine a neural network structure;
the monitoring module is used for designing a target neural network based on the neural network structure in a targeted manner, performing model training on the video data based on the target neural network to generate a target model, and inputting the video data into the target model for monitoring;
and the early warning module is used for displaying the monitoring result and sending out real-time early warning and early warning reasons.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of an intelligent monitoring method for a bright kitchen range based on deep learning detection in the embodiment of the present invention;
FIG. 2 is a flowchart of step 1 in an intelligent monitoring method for a bright kitchen range based on deep learning detection according to an embodiment of the present invention;
fig. 3 is a structural diagram of an intelligent monitoring system for a bright kitchen range based on deep learning detection in the embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the embodiment provides a bright kitchen range intelligent monitoring method based on deep learning target detection, as shown in fig. 1, including:
step 1: acquiring video data based on a target field video stream, and simultaneously performing data analysis on the video data;
step 2: designing a target neural network based on a data analysis result in a targeted manner, performing model training on the video data based on the target neural network to generate a target model, and inputting the video data into the target model for monitoring;
and step 3: and displaying the monitoring result, and sending out real-time early warning and an early warning reason.
In this embodiment, the video data may cover conditions in the kitchen, such as chef behavior, how chef hats and chef uniforms are worn, the garbage can's three attributes (full, uncovered, covered), and the position of garbage.
In this embodiment, the target model includes: an object detection model, a garbage can attribute classification model and an action prediction model; video data corresponding to the target field video stream are input into these models to realize intelligent monitoring of the bright kitchen range.
In this embodiment, the video data is subjected to data analysis, and the analysis includes category statistics, information of image size occupied by the target, calculation of an image mean and variance, and the like.
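As a concrete illustration of this analysis step, a minimal pure-Python sketch of the image mean and variance calculation follows; the pixel-tuple frame representation and the normalization to [0, 1] are assumptions for the example, not details from the embodiment.

```python
def dataset_statistics(frames):
    """Per-channel mean and variance over a set of RGB frames.

    Hypothetical representation: each frame is a list of (r, g, b)
    pixel tuples with values in 0..255, normalized to [0, 1].
    """
    sums = [0.0, 0.0, 0.0]
    squares = [0.0, 0.0, 0.0]
    n = 0
    for frame in frames:
        for px in frame:
            for c in range(3):
                v = px[c] / 255.0
                sums[c] += v
                squares[c] += v * v
            n += 1
    mean = [s / n for s in sums]
    # variance via E[x^2] - (E[x])^2
    var = [squares[c] / n - mean[c] ** 2 for c in range(3)]
    return mean, var
```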
In this embodiment, the neural network structure may be a structure that divides the data into three layers of the neural network, an input layer, an output layer, and an intermediate layer, according to the analysis result of the video data.
In this embodiment, the target neural network may be a classical convolutional neural network and a graph neural network.
In this embodiment, the real-time early warning and the reason for the early warning are designed according to customer requirements. The current means mainly comprise: comparing the previous frame with the next frame; supporting user-defined area settings; supporting a hit-count setting (the number of consecutive alarms for a given target object); and adjusting the prediction score threshold. Combining and superposing multiple means ensures to the greatest extent that the early-warning information is real and effective. Using front-and-rear-frame comparison to judge whether a detected target is moving, combined with user-defined areas to mask distant, blurry targets, effectively reduces false alarms in the smoking and phone-call algorithms. Setting a low score threshold effectively improves the recall rate of the chef-uniform and chef-hat wearing algorithms. Setting a consecutive-alarm count for the garbage can greatly reduces repeated, useless alarms.
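The hit-count and score-threshold means can be sketched as a small debouncing filter; the default threshold and hit count below are illustrative values, not numbers taken from the patent.

```python
from collections import defaultdict

class AlarmFilter:
    """Debounce alarms with a score threshold and a hit count, two of
    the means listed above. A target must exceed the score threshold in
    several consecutive observations before a real alarm is raised."""

    def __init__(self, score_threshold=0.3, hits_required=5):
        self.score_threshold = score_threshold
        self.hits_required = hits_required
        self.hits = defaultdict(int)  # consecutive hits per target id

    def observe(self, target_id, score):
        """Return True when the target has accumulated enough
        consecutive above-threshold hits to alarm."""
        if score >= self.score_threshold:
            self.hits[target_id] += 1
        else:
            self.hits[target_id] = 0  # a miss resets the streak
        if self.hits[target_id] >= self.hits_required:
            self.hits[target_id] = 0  # reset after alarming
            return True
        return False
```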
The beneficial effects of the above technical scheme are: realize the control to video data through the target model to show and real-time early warning based on the control result, not only realized the condition of wearing to chef's clothes, chef's cap, the personnel play behaviors such as cell-phone, smoking, the garbage bin is indiscriminate put, the garbage bin does not cover, the rubbish overflows the accuracy and the intelligence of the control of unsanitary conditions such as full, rubbish is indiscriminate put, through real-time warning and definite early warning reason, be favorable to the condition in time master kitchen of customer.
Example 2:
on the basis of embodiment 1, this embodiment provides an intelligent monitoring method for a bright kitchen range based on deep learning target detection; as shown in fig. 2, in step 1, the working process of obtaining video data based on a target live video stream while analyzing the video data to determine a neural network structure includes:
s101: reading the target site video stream, and collecting video data corresponding to the target site video stream based on a preset server;
s102: extracting a data model related to the video data on the network according to a target program;
s103: and cleaning the video data, analyzing the cleaned video data through the data model, and determining a neural network structure based on an analysis result.
In this embodiment, the preset server may be a server of the system itself, and may collect video data corresponding to the target live video stream.
In this embodiment, the target program may be a crawler program.
In this embodiment, the video data-related data model may be a data model detected in crawling kitchens consistent with the video data, such as: a data model of kitchen hygiene, a data model of performance attributes of the trash can, a data model of chef's behavior, etc.
The beneficial effects of the above technical scheme are: the accurate collection of data is realized, and a data model is crawled, so that the video data can be accurately analyzed, the video detection points can be accurately determined, and the accuracy of monitoring the kitchen is indirectly improved.
Example 3:
on the basis of embodiment 1, this embodiment provides an intelligent monitoring method for bright kitchen ranges based on deep learning target detection, and in step 2, a working process of performing model training on the video data based on the target neural network to generate a target model includes:
acquiring scene requirements of a target client;
determining a target neural network based on the video data and the scene requirements of the target client, and meanwhile, carrying out data annotation on the video data;
performing targeted modification and tuning on the target neural network based on the labeled video data, and performing model training on the target scene based on the tuned target neural network;
and generating the target model according to the training result.
In this embodiment, the scene requirements of the target customer may be based on the layout of the target kitchen's furniture and aisles, the style of the kitchen chefs' uniforms, etc.
In this embodiment, a large amount of data is analyzed and combined with the customer's scene requirements; a classical convolutional neural network and a graph neural network are reasonably selected as baseline versions; the networks are modified and tuned in a targeted manner in combination with the labelled data; and model training for the scene is completed. The trained models comprise the object detection model, the garbage can attribute classification model and the action prediction model.
The beneficial effects of the above technical scheme are: the target neural network can be accurately determined according to the scene requirements of the target client, and meanwhile, the target neural network can be favorably modified and optimized, so that the obtained target model has detection pertinence, and the accuracy of obtaining the target model is greatly improved.
Example 4:
on the basis of embodiment 1, this embodiment provides an intelligent monitoring method for a bright kitchen range based on deep learning target detection; in step 2, the target model includes: an object detection model, a garbage can attribute classification model and an action prediction model; video data corresponding to the target field video stream are input into these three models to realize intelligent monitoring of the bright kitchen range.
The beneficial effects of the above technical scheme are: by determining the type of the target model, the monitoring of the bright kitchen range is more targeted, the analysis of video data is more reasonable, and the intelligence of the monitoring of the bright kitchen range is greatly improved.
Example 5:
on the basis of embodiment 4, this embodiment provides a bright kitchen range intelligent monitoring method based on deep learning target detection, and the process of intelligently monitoring the bright kitchen range includes:
reading the target site video stream, determining a video image corresponding to the current frame, and reading the area range of the human body in the video image corresponding to the current frame;
cropping the frame image according to the area range, and determining a target image based on the cropping result;
enhancing the human body characteristics in the target image, and simultaneously removing background interference of the target image to obtain a processed image;
reading the processed image, determining characteristic skeleton points of the human body, and determining a target person based on the characteristic skeleton points of the human body;
dynamically tracking the target personnel based on a monitoring device, and determining activity frame information of the target personnel;
when the activity frame information of the target person collected by the monitoring device meets a preset frame number, inputting the characteristic skeleton points of the target person and the activity frame information into the action prediction model for action prediction;
determining a current action of the target person based on the prediction result.
In this embodiment, the target image may be determined by cutting out an area of the video image where the human body is located.
In this embodiment, the enhancement processing may be signal enhancement using a two-dimensional Fourier transform, such that the human features in the target image are enhanced.
In this embodiment, the human body feature may be a facial feature, a contour feature, or the like of the human body.
In this embodiment, the characteristic skeletal points may be human contour skeletal features.
In this embodiment, the active frame information may be frame image information corresponding to a video frame when the target person moves in the kitchen.
In this embodiment, the detected human skeleton points are analyzed by the general skeleton point prediction network, and cases where the chef uniform and chef hat are occluded in the picture are excluded.
In this embodiment, the preset number of frames may be 30 frames.
In this embodiment, the general target detection network is used to detect personnel; personnel ids and positions are tracked and recorded in real time by a tracking algorithm, and the personnel information is sent to the general skeleton point prediction network to predict each person's skeleton points. When the tracked information for a person accumulates to 30 frames, the skeleton point and video frame information is sent into the trained graph neural network, and the person's current action is predicted from the person's spatial and temporal information.
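The 30-frame collection step can be sketched as a per-person sliding buffer; the prediction model is passed in as a callable because the graph network itself is not specified here, and the window size mirrors the embodiment's preset frame count.

```python
from collections import deque

WINDOW = 30  # preset frame count from the embodiment

class SkeletonBuffer:
    """Accumulate skeleton points per tracked person; once WINDOW frames
    are collected, hand the window to an action-prediction callable
    (standing in for the trained graph neural network)."""

    def __init__(self, predict):
        self.predict = predict
        self.tracks = {}  # person id -> deque of skeleton-point frames

    def push(self, person_id, skeleton_points):
        buf = self.tracks.setdefault(person_id, deque(maxlen=WINDOW))
        buf.append(skeleton_points)
        if len(buf) == WINDOW:
            action = self.predict(list(buf))  # spatio-temporal window
            buf.clear()
            return action
        return None  # still collecting frames
```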
The beneficial effects of the above technical scheme are: by determining the characteristic skeleton points of the human body, the method is favorable for determining the target personnel, further performing dynamic tracking on the target personnel, and acquiring the activity frame information of the target personnel, so that the method is favorable for acquiring the current action of the target personnel through the action prediction model, and realizes accurate monitoring of the action of the target personnel.
Example 6:
on the basis of embodiment 4, this embodiment provides a bright kitchen range intelligent monitoring method based on deep learning target detection, and the process of intelligently monitoring the bright kitchen range further includes:
inputting the video data into the object detection model for static detection and identification, and determining an identification result;
determining a static chef cap and a static chef uniform of a target site based on the recognition result, performing interactive calculation on a target recognition area according to the recognition result, and determining a target chef corresponding to the static chef cap and the static chef uniform based on the calculation result;
recording the activity area of the target chef in the target live video stream, and determining a recorded image set in time-frame order;
determining a misrecognized image in the recorded image set by using target characteristics according to a preset algorithm, and filtering the misrecognized image to determine a filtered image set;
on the basis of the filtering image set, sequentially carrying out similarity matching on the activity area of the target chef corresponding to the filtering image of the current frame and the activity area of the target chef corresponding to the filtering image of the previous frame;
and determining the wearing condition of the target cook on the cook clothes and the cook cap according to the similarity matching result.
In this embodiment, the static detection identification may be an object that does not move in the kitchen, such as a chef's clothing, chef's hat, etc. placed on a table.
In this embodiment, the static chef uniform may be an unworn chef uniform and the static chef hat may be an unworn chef hat.
In this embodiment, the temporal frame order may be video frames determined in chronological order.
In this embodiment, the preset algorithm may be an image region comparison algorithm.
In this embodiment, the target feature may be a characteristic of the human body that is constantly moving.
In this embodiment, the target recognition area may be the region where the real area intersects the predicted area.
In this embodiment, the recognition result is interactively calculated using the recognition-area IoU (intersection over union of the real region and the predicted region): the attribute of a chef hat not worn by a chef is assigned to a specific person, and that person's region in the picture is recorded. Through an image region comparison algorithm, the fact that the human body is constantly moving is used to filter out some misrecognitions; similarity matching is achieved by continuously and repeatedly comparing against the region recorded in the previous frame; and the wearing condition of the chef uniform and chef hat is finally judged through repeated statistical analysis.
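The IoU-based assignment described above could be sketched as follows (the helper names and the overlap threshold are illustrative assumptions, not the patent's implementation):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def assign_hat_to_person(hat_box, person_boxes, threshold=0.05):
    """Assign an unworn-chef-hat detection to the person box it overlaps
    most (threshold is an assumed tuning value); None if no overlap."""
    best_idx, best_iou = None, threshold
    for idx, person_box in enumerate(person_boxes):
        score = iou(hat_box, person_box)
        if score > best_iou:
            best_idx, best_iou = idx, score
    return best_idx

people = [(0, 0, 10, 20), (30, 0, 40, 20)]
print(assign_hat_to_person((2, 2, 6, 6), people))  # 0: overlaps the first person
```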
The beneficial effects of the above technical scheme are: the target cooks corresponding to the worn cooks clothing and the cooks caps (namely the target cooks corresponding to the static cooks clothing and the static cooks clothing) are determined, so that the activity areas of the target cooks are recorded, the recorded image sets are determined according to the time frame sequence, the image similarity is matched, the wearing conditions of the target cooks to the cooks clothing and the cooks caps are accurately determined according to the matching results, the objectivity and the accuracy of monitoring the target cooks are improved, and the monitoring of bright cooks is more intelligent.
Example 7:
on the basis of embodiment 4, this embodiment provides a bright kitchen range intelligent monitoring method based on deep learning target detection, in which the process of intelligently monitoring the bright kitchen range further includes:
the method comprises the steps of obtaining a video picture of a target site, dividing the video picture based on the client requirements of a target client, and determining a garbage can area and a sensitive area;
respectively carrying out region detection on the garbage can region and the sensitive region, wherein the specific process comprises the following steps:
identifying trash cans in the trash can area based on the trash can attribute classification model, and determining the attributes of the trash cans;
wherein the attributes of the trash can comprise: the trash can is full, the trash can is uncovered, and the trash can is covered;
recognizing the garbage based on the object detection model, interacting with the sensitive area based on a recognition result, and generating an interaction result, wherein the interaction result comprises: refuse is in the sensitive area and refuse is not in the sensitive area.
In this embodiment, the trash can area can be the area in the kitchen where trash cans are placed.
In this embodiment, the sensitive area may be a public area, that is, an area in which no garbage should exist.
In this embodiment, the client can divide areas in the video picture so that areas where trash cans and garbage should not appear receive key supervision. The trash can and the garbage are recognized through the trained object detection model, which interacts with the ROI (region of interest) drawn by the client to judge whether a target appears in an area where it should not. The attributes of the trash can are also identified: the trash can is detected by the object detection model, and the detected trash can is fed into the trash can attribute classification model to predict its attributes, of which there are three in total: the trash can is full, the trash can is uncovered, and the trash can is covered.
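Judging whether a detected target falls inside the client-drawn ROI can be sketched with a standard point-in-polygon test (the polygon ROI and box format are assumptions; the patent does not specify the interaction computation):

```python
def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def point_in_roi(point, roi):
    """Ray-casting test: `roi` is a list of (x, y) polygon vertices
    drawn by the client; returns True if the point lies inside."""
    x, y = point
    inside = False
    n = len(roi)
    for i in range(n):
        x1, y1 = roi[i]
        x2, y2 = roi[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def garbage_in_sensitive_area(garbage_boxes, sensitive_roi):
    """Keep only the garbage detections whose centers fall inside the ROI."""
    return [b for b in garbage_boxes if point_in_roi(box_center(b), sensitive_roi)]

roi = [(0, 0), (100, 0), (100, 100), (0, 100)]  # square sensitive area
hits = garbage_in_sensitive_area([(10, 10, 20, 20), (150, 150, 160, 160)], roi)
print(hits)  # [(10, 10, 20, 20)]
```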
The beneficial effects of the above technical scheme are: the attribute of the garbage can is accurately determined through the garbage can attribute classification model, and the garbage can be effectively identified through the object detection model, so that the garbage can be monitored through interaction of an identification result and a sensitive area, and the comprehensiveness and accuracy of monitoring on a bright kitchen range are greatly improved.
Example 8:
on the basis of embodiment 1, this embodiment provides an intelligent monitoring method for bright kitchen ranges based on deep learning target detection, and in step 2, after generating a target model and before inputting the video data into the target model for monitoring, the method further includes:
reading a video detection point, determining detection characteristics of the video detection point, and determining a classification label of the video data according to the detection characteristics;
clustering the video data based on the classification label, and determining a sub-video data block, wherein the sub-video data block comprises a plurality of video data with the same detection characteristic;
determining sample detection data based on the video detection points, and inputting the sample detection data serving as a check code into the sub-video data block for data check;
determining video data which does not accord with the check code in the sub video data block according to a check result, and taking the video data which does not accord with the check code as irrelevant data;
packing the data of the irrelevant data to generate an irrelevant data set;
determining the data attribute of the irrelevant data in the irrelevant data set, matching the data attribute with the detection characteristics of the video detection point, and judging whether the irrelevant data in the irrelevant data set has effective data or not;
when the data attribute is matched with the detection feature of the video detection point, judging that effective data exists in the unrelated data set, and marking the effective data in the unrelated data set;
extracting the effective data based on the marking result, storing the effective data in a sub-video data block consistent with the detection characteristics of the video detection points, cleaning the residual ineffective data in the irrelevant data set, and generating an accurate sub-video data block based on the cleaning result;
otherwise, judging that no effective data exists in the irrelevant data set, and removing the irrelevant data set.
In this embodiment, the detection feature refers to a detection type corresponding to the video detection point, and may be, for example, the current state of a chef hat, chef clothes, or a trash can.
In this embodiment, the classification tag refers to a tag that is determined for each type of video detection data according to the detection characteristics, and the current type of video data can be determined quickly and accurately by the tag.
In this embodiment, clustering refers to classifying videos of the same type, for example, video data of detected chef clothes and chef caps may be classified into one type.
In this embodiment, the sub-video data block refers to a data set obtained by classifying data of the same type.
In this embodiment, the sample detection data refers to standard data corresponding to the video detection point, that is, reference data corresponding to each detection feature.
In this embodiment, determining the video data in the sub-video data block that does not accord with the check code refers to checking whether detection data inconsistent with the detection feature of that type exists in the sub-video data block.
In this embodiment, the data attribute refers to a data type, a data amount, and the like of the unrelated data.
In this embodiment, marking refers to marking, by a preset data marking method, the video data in the unrelated data set that is required by a particular detection feature; the marking method may be character marking, line-segment marking, etc.
In this embodiment, invalid data refers to video data that is not available for all detected features in the unrelated data set.
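The clustering-and-verification flow of this embodiment might be sketched as follows (the clip representation, labels, and feature names are hypothetical; the patent does not specify the data format):

```python
from collections import defaultdict

def cluster_by_label(clips):
    """Group clips (dicts with a 'label' key) into sub-video data blocks."""
    blocks = defaultdict(list)
    for clip in clips:
        blocks[clip["label"]].append(clip)
    return dict(blocks)

def verify_block(block_label, clips, known_features):
    """Split a sub-block into clips matching its detection feature,
    valid clips that belong to another feature (to be re-filed into the
    matching sub-block), and invalid clips (cleaned out entirely)."""
    matching, refiled, invalid = [], defaultdict(list), []
    for clip in clips:
        feature = clip["feature"]
        if feature == block_label:
            matching.append(clip)
        elif feature in known_features:  # valid data, wrong block
            refiled[feature].append(clip)
        else:
            invalid.append(clip)
    return matching, dict(refiled), invalid

features = {"chef_hat", "trash_can"}
clips = [
    {"label": "chef_hat", "feature": "chef_hat"},
    {"label": "chef_hat", "feature": "trash_can"},  # misfiled but valid
    {"label": "chef_hat", "feature": "noise"},      # invalid
]
block = cluster_by_label(clips)["chef_hat"]
ok, moved, dropped = verify_block("chef_hat", block, features)
print(len(ok), list(moved), len(dropped))  # 1 ['trash_can'] 1
```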
The beneficial effects of the above technical scheme are: the video data are accurately classified by determining the detection characteristics of the video data, abnormal data are eliminated by checking each type of video data, the accuracy of each type of video data is improved, and meanwhile, the rigor and the accuracy of analyzing and monitoring corresponding behavior actions according to each type of video data are improved.
Example 9:
on the basis of embodiment 1, this embodiment provides a bright kitchen range intelligent monitoring method based on deep learning target detection, and in step 3, displaying the monitoring result and sending out real-time early warning and the early warning reason includes:
acquiring a monitoring result and determining data attribute information of the monitoring result;
matching the data attribute information with compression modes in a preset compression mode library, determining a target compression mode corresponding to the monitoring result, and calling a corresponding target compression algorithm based on the target compression mode to compress the monitoring result to obtain a target compression packet, wherein one compression mode corresponds to one compression algorithm;
constructing a transmission link, transmitting the target compression packet to a preset display terminal based on the transmission link, and decompressing the target compression packet by the display terminal;
classifying and displaying the monitoring results based on the decompression processing result, and determining environmental information and time information contained in each type of monitoring results based on the display result;
determining whether illegal or inelegant behaviors exist in each type of monitoring result based on the environment information and the time information;
when illegal or inelegant behavior exists, determining the type of the illegal or inelegant behavior, determining a target alarm mode based on the type, and alarming based on the target alarm mode;
otherwise, judging that the interior of the kitchen has no illegal or inelegant behaviors, and continuously monitoring the kitchen in real time.
In this embodiment, the data attribute information refers to a data amount corresponding to the monitoring result and a data type corresponding to the monitoring result.
In this embodiment, the preset compression mode library is set in advance, and multiple data compression modes are stored inside the preset compression mode library.
In this embodiment, the target compression algorithm refers to a program or script or the like for executing compression of data.
In this embodiment, the preset display terminal is set in advance, and may be a liquid crystal display, for example.
In this embodiment, the environment information and the time information are used to determine whether the current environment is located inside the kitchen and whether the current monitoring time is working hours on duty.
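The mode-matched compression and terminal-side decompression described above can be sketched with standard library codecs (the mapping from data attributes to algorithms is an illustrative assumption; the patent only states that one compression mode corresponds to one algorithm):

```python
import bz2
import lzma
import zlib

# Hypothetical preset compression mode library: data-attribute type -> algorithm.
COMPRESSION_MODES = {
    "text_report": ("zlib", zlib.compress, zlib.decompress),
    "image_frames": ("lzma", lzma.compress, lzma.decompress),
    "event_log": ("bz2", bz2.compress, bz2.decompress),
}

def compress_result(data_type, payload):
    """Pick the target compression mode for the monitoring result and
    return (mode_name, compressed_packet)."""
    mode, compress, _ = COMPRESSION_MODES[data_type]
    return mode, compress(payload)

def decompress_packet(mode, packet):
    """Display-terminal side: decompress the received target packet."""
    for name, _, decompress in COMPRESSION_MODES.values():
        if name == mode:
            return decompress(packet)
    raise ValueError(f"unknown compression mode: {mode}")

mode, packet = compress_result("text_report", b"chef hat missing at station 2")
restored = decompress_packet(mode, packet)
print(mode, restored == b"chef hat missing at station 2")  # zlib True
```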
The beneficial effects of the above technical scheme are: through compressing the transmission with the control result, realize carrying out effectual demonstration to the control result, whether accurately judge the kitchen inside according to the display result simultaneously and have the violation of rules and regulations or inelegant action, improved the degree of accuracy to the inside control dynamics in kitchen and control, be convenient for carry out good management to the kitchen inside.
Example 10:
this embodiment provides a bright kitchen range intelligent monitoring system based on deep learning target detection, which, as shown in fig. 3, includes:
the data acquisition module is used for acquiring video data based on a target field video stream and simultaneously performing data analysis on the video data to determine a neural network structure;
the monitoring module is used for designing a target neural network based on the neural network structure in a targeted manner, performing model training on the video data based on the target neural network to generate a target model, and inputting the video data into the target model for monitoring;
and the early warning module is used for displaying the monitoring result and sending out real-time early warning and early warning reasons.
The beneficial effects of the above technical scheme are: realize the control to video data through the target model to show and real-time early warning based on the control result, not only realized the condition of wearing to chef's clothes, chef's cap, the personnel play behaviors such as cell-phone, smoking, the garbage bin is indiscriminate put, the garbage bin does not cover, the rubbish overflows the accuracy and the intelligence of the control of unsanitary conditions such as full, rubbish is indiscriminate put, through real-time warning and definite early warning reason, be favorable to the condition in time master kitchen of customer.
Example 11:
on the basis of embodiment 1, step 2 further includes:
acquiring the total number of workers in a kitchen, and recording the total attendance times of each worker in the current month;
determining the illegal attendance times of each worker according to the total attendance times of each worker in the current month, wherein the total attendance times comprise the illegal attendance times;
calculating the staff behavior normalization rate in the kitchen according to the total number of the staff in the kitchen, the total attendance times of each staff in the current month and the violation attendance times of each staff;
(Formula rendered as an image in the original: the staff behavior normalization rate η is computed from n, β_i, m, A_j and the error factor δ.)
wherein η represents the staff behavior normalization rate in the kitchen; δ represents an error factor with value range (0.01, 0.015); n represents the total number of workers in the kitchen; i indexes the current worker; β_i represents the total attendance of the i-th worker in the current month; m represents the total number of workers with attendance violations in the current month, these workers being included in the total number of workers; j indexes the current worker with attendance violations; A_j represents the number of violation attendances of the j-th worker with attendance violations in the current month;
recording the staff behavior normalization rate in the kitchen, determining a first recording result, simultaneously acquiring the total amount of garbage produced in the kitchen, and determining the garbage cleaning amount in the kitchen;
calculating the cleanliness of the kitchen according to the total amount of the garbage and the garbage cleaning amount;
(Formula rendered as an image in the original: the cleanliness λ is computed from S_1, S_2, m, W and a statistical error constant.)
wherein λ represents the cleanliness of the kitchen; S_2 represents the total area of the cooktop inside the kitchen; S_1 represents the area of the cooktop stained with oil or other foreign matter, and S_1 is less than S_2; m represents the amount of garbage inside the kitchen that is not cleaned up in time each day; W represents the total amount of garbage generated in the kitchen every day, and W is far larger than m;
wherein the remaining symbol (rendered as an image in the original) represents a statistical error constant with value range (-0.2, 0.2);
recording the cleanliness of the kitchen and determining a second recording result;
and evaluating the kitchen operation according to the first recording result and the second recording result, and transmitting the evaluation result to a client for displaying.
In this embodiment, the cleanliness may be a dimensionless number used to evaluate how clean the kitchen is; a greater cleanliness value indicates a cleaner kitchen.
In this embodiment, the number of attendance violations of each worker is recorded, for example, the number of violations the worker commits on a day of attendance (for example, smoking during working hours).
The beneficial effects of the above technical scheme are: through staff's action standard rate in the accurate calculation kitchen to confirm first record result and the cleanliness of accurate calculation kitchen, thereby confirm the second record result, appraise the kitchen operation according to first record result and second record result, thereby make and monitor more washization to the kitchen, and upload the customer end with the appraisal result, be favorable to accurately realizing the supervision of customer to the kitchen, improved the intelligence and the accuracy of bright kitchen light stove control.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A bright kitchen range intelligent monitoring method based on deep learning target detection, characterized by comprising:
step 1: acquiring video data based on a target field video stream, and simultaneously, performing data analysis on the video data to determine a neural network structure;
step 2: designing a target neural network based on the neural network structure in a targeted manner, performing model training on the video data based on the target neural network to generate a target model, and inputting the video data into the target model for monitoring;
and step 3: and displaying the monitoring result, and sending out real-time early warning and an early warning reason.
2. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 1, wherein in step 1, the working process of acquiring video data based on a target field video stream and simultaneously performing data analysis on the video data to determine a neural network structure comprises the following steps:
reading the target site video stream, and collecting video data corresponding to the target site video stream based on a preset server;
extracting a data model related to the video data on the network according to a target program;
and cleaning the video data, analyzing the cleaned video data through the data model, and determining a neural network structure based on an analysis result.
3. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 1, wherein in step 2, the working process of performing model training on the video data based on the target neural network to generate a target model comprises the following steps:
acquiring scene requirements of a target client;
determining a target neural network based on the video data and the scene requirements of the target client, and meanwhile, carrying out data annotation on the video data;
performing targeted modification and tuning on the target neural network based on the labeled video data, and performing model training on the target scene based on the tuned target neural network;
and generating the target model according to the training result.
4. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 1, wherein in step 2, the target model comprises an object detection model, a garbage can attribute classification model and an action prediction model, and the video data corresponding to the target field video stream is input into the object detection model, the garbage can attribute classification model and the action prediction model, so as to realize intelligent monitoring of the bright kitchen range.
5. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 4, wherein the process of intelligently monitoring the bright kitchen range comprises the following steps:
reading the target site video stream, determining a video image corresponding to the current frame, and reading the area range of the human body in the video image corresponding to the current frame;
shearing the frame image according to the area range, and determining a target image based on a shearing result;
enhancing the human body characteristics in the target image, and simultaneously removing background interference of the target image to obtain a processed image;
reading the processed image, determining characteristic skeleton points of the human body, and determining a target person based on the characteristic skeleton points of the human body;
dynamically tracking the target personnel based on a monitoring device, and determining activity frame information of the target personnel;
when the activity frame information of the target person collected by the monitoring device meets a preset frame number, inputting the characteristic skeleton points of the target person and the activity frame information into the action prediction model for action prediction;
determining a current action of the target person based on the prediction result.
6. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 4, wherein the process of intelligently monitoring the bright kitchen range further comprises:
inputting the video data into the object detection model for static detection and identification, and determining an identification result;
determining a static chef cap and a static chef uniform of a target site based on the recognition result, performing interactive calculation on a target recognition area according to the recognition result, and determining a target chef corresponding to the static chef cap and the static chef uniform based on the calculation result;
recording the target chef based on the activity area in the target live video stream, and determining a recorded image set according to the time frame sequence;
determining a misrecognized image in the recorded image set by using target characteristics according to a preset algorithm, and filtering the misrecognized image to determine a filtered image set;
on the basis of the filtering image set, sequentially carrying out similarity matching on the activity area of the target chef corresponding to the filtering image of the current frame and the activity area of the target chef corresponding to the filtering image of the previous frame;
and determining the wearing condition of the target cook on the cook clothes and the cook cap according to the similarity matching result.
7. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 4, wherein the process of intelligently monitoring the bright kitchen range further comprises:
the method comprises the steps of obtaining a video picture of a target site, dividing the video picture based on the client requirements of a target client, and determining a garbage can area and a sensitive area;
respectively carrying out region detection on the garbage can region and the sensitive region, wherein the specific process comprises the following steps:
identifying trash cans in the trash can area based on the trash can attribute classification model, and determining the attributes of the trash cans;
wherein the attributes of the trash can comprise: the trash can is full, the trash can is uncovered, and the trash can is covered;
recognizing the garbage based on the object detection model, interacting with the sensitive area based on a recognition result, and generating an interaction result, wherein the interaction result comprises: refuse is in the sensitive area and refuse is not in the sensitive area.
8. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 1, wherein in step 2, after generating a target model and before inputting the video data into the target model for monitoring, the method further comprises:
reading a video detection point, determining the detection characteristics of the video detection point, and determining a classification label of the video data according to the detection characteristics;
clustering the video data based on the classification label, and determining a sub-video data block, wherein the sub-video data block comprises a plurality of video data with the same detection characteristic;
determining sample detection data based on the video detection points, and inputting the sample detection data serving as a check code into the sub-video data block for data check;
according to the checking result, determining the video data which does not accord with the checking code in the sub video data block, and taking the video data which does not accord with the checking code as irrelevant data;
packing the data of the irrelevant data to generate an irrelevant data set;
determining the data attribute of the irrelevant data in the irrelevant data set, matching the data attribute with the detection characteristics of the video detection point, and judging whether the irrelevant data in the irrelevant data set has effective data or not;
when the data attribute is matched with the detection feature of the video detection point, judging that effective data exists in the unrelated data set, and marking the effective data in the unrelated data set;
extracting the effective data based on the marking result, storing the effective data in a sub-video data block consistent with the detection characteristics of the video detection points, cleaning the residual ineffective data in the irrelevant data set, and generating an accurate sub-video data block based on the cleaning result;
otherwise, judging that no effective data exists in the irrelevant data set, and removing the irrelevant data set.
9. The bright kitchen range intelligent monitoring method based on deep learning target detection as claimed in claim 1, wherein in step 3, displaying the monitoring result and sending out real-time early warning and the early warning reason comprises:
acquiring a monitoring result and determining data attribute information of the monitoring result;
matching the data attribute information with compression modes in a preset compression mode library, determining a target compression mode corresponding to the monitoring result, and calling a corresponding target compression algorithm based on the target compression mode to compress the monitoring result to obtain a target compression packet, wherein one compression mode corresponds to one compression algorithm;
constructing a transmission link, transmitting the target compression packet to a preset display terminal based on the transmission link, and decompressing the target compression packet by the display terminal;
classifying and displaying the monitoring results based on the decompression processing result, and determining environmental information and time information contained in each type of monitoring results based on the display result;
determining whether illegal or inelegant behaviors exist in each type of monitoring result based on the environment information and the time information;
when illegal or inelegant behavior exists, determining the type of the illegal or inelegant behavior, determining a target alarm mode based on the type, and alarming based on the target alarm mode;
otherwise, judging that the interior of the kitchen has no illegal or inelegant behaviors, and continuously monitoring the kitchen in real time.
10. A bright kitchen range intelligent monitoring system based on deep learning target detection, characterized by comprising:
the data acquisition module is used for acquiring video data based on a target field video stream and simultaneously performing data analysis on the video data to determine a neural network structure;
the monitoring module is used for designing a target neural network based on the neural network structure in a targeted manner, performing model training on the video data based on the target neural network to generate a target model, and inputting the video data into the target model for monitoring;
and the early warning module is used for displaying the monitoring result and sending out real-time early warning and early warning reasons.
CN202210480478.3A 2022-05-05 2022-05-05 Intelligent open kitchen bright stove monitoring method and system based on deep learning detection Active CN114821476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210480478.3A CN114821476B (en) 2022-05-05 2022-05-05 Intelligent open kitchen bright stove monitoring method and system based on deep learning detection


Publications (2)

Publication Number Publication Date
CN114821476A true CN114821476A (en) 2022-07-29
CN114821476B CN114821476B (en) 2022-11-22

Family

ID=82512231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210480478.3A Active CN114821476B (en) 2022-05-05 2022-05-05 Intelligent open kitchen bright stove monitoring method and system based on deep learning detection

Country Status (1)

Country Link
CN (1) CN114821476B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239504A (en) * 2014-09-15 2014-12-24 金阿宁 Data processing method for establishing of doctor competency model
US10257436B1 (en) * 2017-10-11 2019-04-09 Adobe Systems Incorporated Method for using deep learning for facilitating real-time view switching and video editing on computing devices
CN110717448A (en) * 2019-10-09 2020-01-21 杭州华慧物联科技有限公司 Dining room kitchen intelligent management system
CN110889367A (en) * 2019-11-22 2020-03-17 贵州科学院(贵州省应用技术研究院) Deep learning-based kitchen worker wearing standard identification method
CN110909703A (en) * 2019-11-29 2020-03-24 中电福富信息科技有限公司 Detection method for chef cap in bright kitchen range scene based on artificial intelligence
CN111392280A (en) * 2020-03-19 2020-07-10 上海以睿数据科技有限公司 Garbage classification behavior development system
CN112365186A (en) * 2020-11-27 2021-02-12 中国电建集团海外投资有限公司 Health degree evaluation method and system for electric power information system
CN112507892A (en) * 2020-12-14 2021-03-16 公安部第三研究所 System, method and device for identifying and processing wearing of key personnel in special place based on deep learning, processor and storage medium thereof
CN112560759A (en) * 2020-12-24 2021-03-26 中再云图技术有限公司 Bright kitchen range standard detection and identification method based on artificial intelligence, storage device and server
CN113392715A (en) * 2021-05-21 2021-09-14 上海可深信息科技有限公司 Chef cap wearing detection method
CN113628172A (en) * 2021-07-20 2021-11-09 杭州兼果网络科技有限公司 Intelligent detection algorithm for personnel handheld weapons and smart city security system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙宝聪 (Sun Baocong): "Research on analysis techniques for abnormal behavior of airport personnel based on image detection", 《数字通信世界》 (Digital Communication World) *
陆游飞等 (Lu Youfei et al.): "Research on a video pedestrian detection network incorporating image information", 《杭州电子科技大学学报(自然科学版)》 (Journal of Hangzhou Dianzi University (Natural Science Edition)) *

Also Published As

Publication number Publication date
CN114821476B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
US7840515B2 (en) System architecture and process for automating intelligent surveillance center operations
CN110166741A (en) Environment control method, device, equipment and storage medium based on artificial intelligence
CN101795395B (en) System and method for monitoring crowd situation
CN109657624A (en) Monitoring method, the device and system of target object
CN108417274A (en) Epidemic forecasting method, system and device
CN109886555A (en) The monitoring method and device of food safety
CN110298254A (en) Analysis method and system for abnormal personnel behavior
CN108304816B (en) Identity recognition method and device, storage medium and electronic equipment
CN110633697A (en) Intelligent monitoring method for kitchen sanitation
CN109168052A (en) Service satisfaction determination method, apparatus and computing device
CN111222373A (en) Personnel behavior analysis method and device and electronic equipment
CN115691034A (en) Intelligent household abnormal condition warning method, system and storage medium
CN111506635A (en) System and method for analyzing residential electricity consumption behavior based on autoregressive naive Bayes algorithm
CN108874910A (en) Vision-based small object recognition system
CN114724332A (en) Food material safety monitoring system and monitoring method
CN111191498A (en) Behavior recognition method and related product
CN113469080B (en) Method, system and equipment for collaborative perception of individual, group and scene interaction
CN114821476B (en) Intelligent open kitchen bright stove monitoring method and system based on deep learning detection
CN117726367A (en) Intelligent site selection method and device and storage medium
CN113591550A (en) Method, device, equipment and medium for establishing automatic personal preference detection model based on pupil change
CN116740885A (en) Smoke flame alarm method and device, electronic equipment and storage medium
CN115240277A (en) Security check behavior monitoring method and device, electronic equipment and storage medium
CN109934099A (en) Placement location reminding method and device, storage medium, and electronic device
CN112133436B (en) Health warning method and system based on big data analysis and readable storage medium
CN114359783A (en) Abnormal event detection method, device and equipment

Legal Events

Date Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant