CN114067369A - Dining table state identification method and system based on image identification


Info

Publication number
CN114067369A
CN114067369A
Authority
CN
China
Prior art keywords
dining table
state
guest
dining
human body
Prior art date
Legal status
Granted
Application number
CN202210048485.6A
Other languages
Chinese (zh)
Other versions
CN114067369B (en)
Inventor
杨恒
龙涛
阮仕海
Current Assignee
Shenzhen Aimo Technology Co ltd
Original Assignee
Shenzhen Aimo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Aimo Technology Co ltd
Priority to CN202210048485.6A
Publication of CN114067369A
Application granted
Publication of CN114067369B
Active legal status (current)
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 - Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a dining table state identification method and system based on image identification, relating to the technical field of catering services. It solves the technical problems that existing table-turnover statistics methods have large errors and high costs and cannot supervise the cleaning of dining tables in time. The method comprises the following steps: S11, marking the area of each dining table in each camera view; S12, acquiring an image from each camera, detecting human bodies, and classifying each detected human body; S13, associating the detected human bodies with dining tables; S14, judging the state of each dining table associated with a human body; S15, according to the judgment results, counting the table turnover rate and identifying the non-compliant behavior of dining tables not being cleaned in time; S16, outputting the results and returning to step S12 for the next round of recognition. The method has high accuracy and low application cost, and can effectively supervise whether dining tables are cleaned promptly after use.

Description

Dining table state identification method and system based on image identification
Technical Field
The invention relates to the technical field of catering services, and in particular to a dining table state identification method and system based on image identification.
Background
Most existing ways of counting table turnover in restaurants are manual. In the traditional approach, patrol staff walk the floor; when they find a table that is free and clean, they use an intercom to tell the staff seating guests which table numbers are free and how many people each table can seat. This manual approach increases the staff's workload, and the accuracy of the resulting counts cannot be evaluated.
In other schemes, turnover is calculated from the per-table consumption data recorded by the ordering system. However, the ordering system cannot count the actual number of turnovers when the same party at a table places multiple orders or adds to an existing order.
A further problem shared by both approaches is that they cannot monitor, or give feedback on, the non-compliant behavior of staff failing to clean a dining table in time, which increases customer waiting time and degrades the customers' dining experience.
Disclosure of Invention
The invention aims to provide a dining table state identification method and system based on image identification, so as to solve the technical problems that existing table-turnover statistics methods have large errors and high costs and cannot supervise the cleaning of dining tables in time. The technical effects that the preferred technical schemes among those provided by the invention can produce are described in detail below.
In order to achieve the purpose, the invention provides the following technical scheme:
The invention provides a dining table state identification method based on image identification, comprising the following steps:
S11, marking the area of each dining table in each camera view;
S12, acquiring an image from each camera, detecting human bodies, and classifying each detected human body;
S13, associating the detected human bodies with dining tables, and judging whether each dining table is empty;
S14, judging the state of each dining table associated with a human body;
S15, according to the judgment results, counting the table turnover rate of the dining tables and identifying the non-compliant behavior of dining tables not being cleaned in time;
S16, outputting the results and returning to step S12 for the next round of recognition.
Further, step S12 includes the following steps:
s121, detecting the human body of the image acquired from the camera by adopting a human body identification method to obtain a rectangular frame of each human body;
and S122, extracting the small pictures from the rectangular frames of the human bodies, and identifying the dresses by adopting a dressing identification method to divide the human bodies into guests and employees.
Further, the human body recognition method is a deep learning target detection method; the dressing recognition method is a deep learning classification method.
Further, step S13 includes the following steps:
s131, calculating the IOU value of each dining table area and each rectangular frame of the human body;
s132, whether the IOU value is larger than a set threshold value or not; if yes, go to step S133; otherwise, go to step S134;
s133, associating the human body with the dining table, and identifying the category of the associated human body; step S135 is executed;
s134, the human body is not associated with the dining table;
and S135, judging whether the dining table is empty or not.
Further, the IoU value is calculated as:
IoU = area0 / (area1 + area2 - area0);
where area0 is the area of the overlap between the collection area and the human body's rectangular frame, area1 is the area of the collection area, and area2 is the area of the rectangular frame of the human body.
Further, step S14 includes the following steps:
S141, initializing the state of each dining table to empty; the states of a dining table further include in use and cleaning, where cleaning includes to be cleaned;
S142, if, for a continuous minutes, a guest is detected as associated with the dining table and is confirmed to be seated, updating the initial state of the dining table to in use;
S143, if, for b continuous minutes, a dining table in the in-use state is no longer associated with any guest, updating the state of the dining table to clearing to be confirmed and recording the guest departure time;
S144, if, for m continuous minutes, a dining table in the clearing-to-be-confirmed state is associated with a guest again, and the guest is at least one of the guests previously associated with this dining table, updating the state of the dining table to in use and recording the guest return time;
or, if, for c continuous minutes, a dining table in the clearing-to-be-confirmed state is recognized as an empty table, updating the state of the dining table to empty and updating the clearing end time;
or, if more than d minutes pass with the dining table in the clearing-to-be-confirmed state, no guest associated, and the dining table not recognized as an empty table, updating the state of the dining table to to be cleaned and updating the to-be-cleaned time;
or, if, for e continuous minutes, a dining table in the clearing-to-be-confirmed state is detected as associated with an employee and the recognized employee behavior is cleaning the dining table, updating the state of the dining table to being cleaned and updating the cleaning time;
S145, if more than f minutes pass with the dining table in the to-be-cleaned or being-cleaned state and the dining table is not recognized as an empty table, resetting the state of the dining table to the initial state;
or, if, for g continuous minutes, a dining table in the to-be-cleaned or being-cleaned state is detected as associated with a new guest who is seated at the table, resetting the state of the dining table to the initial state;
or, if, for h continuous minutes, a dining table in the to-be-cleaned or being-cleaned state is not recognized as an empty table and is not associated with any human body, updating the state of the dining table to empty to be confirmed;
S146, if, for j continuous minutes, a dining table in the empty-to-be-confirmed state is detected as associated with an employee and the recognized employee behavior is cleaning the dining table, updating the state of the dining table to being cleaned and updating the cleaning time;
S147, if, for k continuous minutes, a dining table in the being-cleaned state is detected as associated with a new guest who is seated at the table, resetting the state of the dining table to the initial state;
or, if more than j minutes pass with the dining table in the being-cleaned state and no employee is detected as associated with the dining table, updating the state of the dining table to empty.
Further, each switch of a dining table from any state to the empty state counts as one use; the table turnover rate = total number of uses of all dining tables / total number of dining tables - 1.
Further, the time at which the state of a dining table switches from in use to clearing to be confirmed or to be cleaned represents the guest departure time, and the time at which it switches to empty is the cleaned time, so the cleaning duration = cleaned time - guest departure time; when the cleaning duration exceeds a preset value, it indicates that the staff did not clean the dining table in time.
Further, before step S11 is executed, the method further comprises: collecting images of guests and employees, identifying employee clothing and guest clothing with the deep learning classification method, and training the deep learning classification method according to the recognition results; and collecting images of dining tables in the empty and non-empty states, identifying the empty-table state with the deep learning classification method, and training the deep learning classification method according to the recognition results.
In another aspect of the present invention, there is also provided a dining table state recognition system based on image recognition, comprising: a configuration module for configuring and editing a protocol file, the protocol file being binary format data; an analysis module for parsing the protocol file into JSON format data; a storage medium storing a computer program that, when executed, implements the image-recognition-based dining table state recognition method described above; a processor for executing the computer program stored in the storage medium so as to carry out the method; a memory for storing the images called by the processor; and a display module for displaying the output results of the method; wherein the processor is connected with the configuration module, the analysis module, the storage medium, and the display module.
Implementing one of the technical schemes of the invention has the following advantages or beneficial effects:
The invention uses image recognition to detect whether guests and employees are present around a dining table and whether the table is empty, combines this with time-sequence information to comprehensively judge the table's empty, in-use, to-be-cleaned, and being-cleaned states, and finally counts the number of uses of each table and its cleaning duration from the changes in table state, thereby calculating the turnover rate and detecting whether tables are cleaned in time. The turnover rate computed by the method is highly accurate, and prompt post-meal cleaning can be supervised effectively, which improves the dining experience of restaurant customers and greatly reduces the labor cost a restaurant would otherwise incur to raise turnover and supervise cleaning.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without inventive efforts, wherein:
fig. 1 is a flowchart of a method for recognizing a dining table state based on image recognition according to an embodiment of the present invention;
fig. 2 is a flowchart of step S12 in a method for recognizing a dining table state based on image recognition according to an embodiment of the present invention;
fig. 3 is a flowchart of step S13 in the method for recognizing a dining table state based on image recognition according to the embodiment of the present invention;
fig. 4 is a flowchart of step S14 in a method for recognizing a dining table state based on image recognition according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a rectangular frame and an acquisition area of a human body of a dining table state identification method based on image identification according to an embodiment of the present invention.
Detailed Description
In order that the objects, aspects and advantages of the present invention will become more apparent, various exemplary embodiments will be described below with reference to the accompanying drawings, which form a part hereof, and in which are shown by way of illustration various exemplary embodiments in which the invention may be practiced. The same numbers in different drawings identify the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. It is to be understood that they are merely examples of processes, methods, apparatus, etc. consistent with certain aspects of the present disclosure as detailed in the appended claims, and that other embodiments may be used or structural and functional modifications may be made to the embodiments set forth herein without departing from the scope and spirit of the present disclosure.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," and the like are used in the orientations and positional relationships illustrated in the accompanying drawings for the purpose of facilitating the description of the present invention and simplifying the description, and do not indicate or imply that the elements so referred to must have a particular orientation, be constructed in a particular orientation, and be operated. The terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. The term "plurality" means two or more. The terms "coupled" and "connected" are to be construed broadly and may include, for example, a fixed connection, a removable connection, a unitary connection, a mechanical connection, an electrical connection, a communicative connection, a direct connection, an indirect connection via intermediate media, and may include, but are not limited to, a connection between two elements or an interactive relationship between two elements. The term "and/or" includes any and all combinations of one or more of the associated listed items. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In order to explain the technical solution of the present invention, the following description is made by way of specific examples, which only show the relevant portions of the embodiments of the present invention.
Embodiment one:
As shown in figs. 1 to 5, the present invention provides a dining table state identification method based on image identification, comprising the following steps:
S11, marking the area of each dining table in each camera view. In this step, the recognition area of every dining table in the restaurant is marked manually in the camera views; for each table, the area used is the one closest to its corresponding camera and with the best shooting effect. Specifically, the areas of all tables are marked across all cameras without repetition or omission. The marking form depends on the specific shape of the table: a rectangular table is marked by its 4 vertices, while a circular table is marked by its full circumference. The same table may be captured by multiple cameras, so only one camera is selected during manual marking to avoid counting the same table twice. Further, the collection area needs to be expanded into a dining consumption area comprising the table area and the surrounding chair area, as shown in fig. 5. A minimal sketch of one way such marked regions could be stored is given below.
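The sketch below is illustrative only and not part of the patent; the names camera_id, table_id, and polygon are assumptions introduced here:

```python
# Hypothetical configuration sketch: one marked collection area per dining
# table, tied to the single camera chosen for that table.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # pixel coordinates in the chosen camera view

@dataclass
class TableRegion:
    camera_id: str        # the one camera selected for this table
    table_id: str         # unique table number; no table is marked twice
    polygon: List[Point]  # 4 vertices for a rectangular table, a sampled
                          # outline for a circular one; expanded to cover
                          # the table plus the surrounding chairs (fig. 5)

REGIONS = [
    TableRegion("cam01", "T01", [(120, 80), (360, 80), (360, 260), (120, 260)]),
    TableRegion("cam02", "T02", [(40, 300), (280, 300), (280, 500), (40, 500)]),
]
```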
S12, acquiring an image from each camera, detecting human bodies, and classifying each detected human body. Specifically, the cameras capture real-time footage of each dining table, and one original image is grabbed at a fixed interval (e.g., every 1 s), or several original images (e.g., 3 to 10) are grabbed per minute, depending on the actual precision requirement; when the precision requirement is high, one image is grabbed per second. The acquired original image is actively fetched by the MCU or processor (see embodiment two), which processes the image to detect the human bodies in it. Since the people in a restaurant are generally only consuming guests and serving staff, the detected human bodies must also be classified for subsequent use. The detailed steps are as follows:
S121, performing human body detection on the image acquired from the camera with a human body recognition method to obtain a rectangular frame for each human body. The human body recognition method in this step is a deep learning target detection method, which is prior art and is not described again here. The rectangular frame makes it convenient to associate the human body with a dining table later, and serves as the criterion for subsequently judging whether a guest is seated. Each detected human body corresponds to one rectangular frame. It should be noted that the size of the rectangular frame changes with the person's actions, such as standing, sitting, or bending over;
S122, cropping a small picture from each human body's rectangular frame and identifying the clothing with a dressing recognition method, so as to classify each human body as a guest or an employee. In this step a person is identified as a guest or an employee by their clothing. For employee identification, the whole human body area is cropped, and the deep learning method automatically extracts features for classification and recognition. In general, once the employees are identified, everyone else can be regarded as a guest; of course, to improve recognition accuracy, guests can also be identified explicitly. The dressing recognition method is a trained deep learning classification method, which is prior art and is not described again here. A hedged sketch of this detection-and-classification pipeline is given below.
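As an illustration of steps S121 and S122, the sketch below wires a generic person detector and clothing classifier together; detect_people and classify_clothing are placeholder names assumed here, not methods named in the patent:

```python
# Illustrative pipeline sketch for S121-S122, assuming the image is a
# NumPy-style array and trained models are available.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # rectangular frame (x1, y1, x2, y2)

@dataclass
class Person:
    box: Box
    role: str  # "guest" or "employee"

def detect_people(image) -> List[Box]:
    """Deep learning target detection; one rectangular frame per human body."""
    raise NotImplementedError  # any off-the-shelf person detector

def classify_clothing(crop) -> str:
    """Deep learning classifier trained on employee vs. guest clothing."""
    raise NotImplementedError

def recognize_people(image) -> List[Person]:
    people = []
    for (x1, y1, x2, y2) in detect_people(image):              # S121
        crop = image[y1:y2, x1:x2]                             # small picture
        people.append(Person((x1, y1, x2, y2),
                             classify_clothing(crop)))         # S122
    return people
```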
S13, associating the detected human bodies with dining tables, and judging whether each dining table is empty. The association covers two cases: a guest associated with a dining table, indicating that the guest is seated and dining or has checked out and left; and an employee associated with a dining table, indicating that the employee is cleaning it. Association means assigning the rectangular frame of a human body to a table number; combined with the dressing recognition result (guest/employee) for that rectangular frame, it is then known whether each table currently has guests or employees, and this is recorded in an association data table so that the table state can be judged in the following steps. This step also judges the empty-table state in the currently collected image, providing the basis for the actual state switching of the tables below. The specific steps are as follows:
s131, calculating the acquisition area of each dining table and the IOU value of the rectangular frame of each human body. The IOU is an Intersection-over-unity (IOU), and one concept used in target detection is the overlapping rate of the generated candidate frame (candidate frame) and the original labeled frame (ground channel frame), i.e. the ratio of their Intersection to Union. The optimal situation is complete overlap, i.e. a ratio of 1. Further, the calculation method of the IOU value is as follows:
IOU=area0/(area1+ area2- area0) (1);
wherein area0 is the area of the overlapping part of the acquisition region and the rectangular frame of the human body, area1 is the area of the acquisition region, and area2 is the area of the rectangular frame of the human body.
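The following sketch of equation (1) simplifies the collection area to an axis-aligned rectangle, whereas the patent also allows polygonal and circular areas:

```python
# IoU of the table's collection area and a human body's rectangular frame,
# both given as (x1, y1, x2, y2). Mirrors IoU = area0 / (area1 + area2 - area0).
def iou(region, box):
    ix1, iy1 = max(region[0], box[0]), max(region[1], box[1])
    ix2, iy2 = min(region[2], box[2]), min(region[3], box[3])
    area0 = max(0, ix2 - ix1) * max(0, iy2 - iy1)              # overlap
    area1 = (region[2] - region[0]) * (region[3] - region[1])  # collection area
    area2 = (box[2] - box[0]) * (box[3] - box[1])              # body frame
    return area0 / (area1 + area2 - area0) if area0 else 0.0

# With the threshold of this embodiment, the body is associated with the
# table when iou(region, box) > 0.4 (step S132).
```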
S132, judging whether the IoU value is larger than the set threshold; if yes, executing step S133; otherwise, executing step S134. In this embodiment, the threshold is set to 0.4;
S133, associating the human body with the dining table and identifying the category of the associated human body, then executing step S135;
S134, leaving the human body unassociated with any dining table;
S135, judging whether the dining table is empty. The empty-table state of the dining table is recognized with a deep learning classification method; the judgment made on the currently collected image provides the basis for switching the actual state of the table, and if the table is judged to be non-empty, the relevant staff can be prompted to clean it. In general, business can start once all tables are empty.
S14, judging the state of each dining table associated with a human body. Specifically, according to whether a dining table is associated with a human body, its state can be divided into empty and non-empty, and the non-empty states are further divided into in use and cleaning, where cleaning includes to be cleaned. An empty table is associated with neither guests nor employees and has been restored to the standard state, meaning the tabletop is clean, or clean with unused bowls and chopsticks laid out. In use means guests are associated with the table and seated, or have left their seats without paying. Being cleaned means an employee is associated with the table and the recognized employee behavior is cleaning it. To be cleaned means neither guests nor employees are associated with the table and the tabletop is not clean (for example, used bowls, chopsticks, and scattered rubbish are present). In this step each table is initialized to empty and then passes in turn through in use, being cleaned or to be cleaned, and back to empty, which constitutes one complete consume-and-clean cycle. The possible switches are: empty to in use; in use to clearing to be confirmed; clearing to be confirmed to in use, to empty, to to be cleaned, or to being cleaned; to be cleaned or being cleaned to the initial empty state or to empty to be confirmed; empty to be confirmed to being cleaned; and being cleaned to empty. The specific transition steps are as follows:
s141, initializing the state of each dining table into an empty table;
s142, continuously for a minutes (e.g., 10 minutes), if it is detected that the guest is associated with the table and the guest is confirmed to be seated, the initial state of the table is updated to the in-use state. After the MCU or the processor retrieves all tables associated with guests from the above association data table, it is further required to confirm whether the guests are seated. The criterion for determining whether the guest is seated may be a change in the size of the rectangular frame configured for each guest. It is known that the height of the rectangular frame is significantly greater (e.g., 1 times) when the person is standing than when the person is sitting. Thus, when the height of the rectangular frame is smaller than the height when the guest is standing (e.g. the height ratio is smaller than 0.5) when the guest is detected to sit down, it indicates that the guest has sat up. Certainly, other identification modes can be provided, for example, in the existing human body posture identification technology, the sitting posture state of the guest is identified through the human body posture, and the guest is indicated to be seated;
and S143, continuously performing for b minutes (such as 2 minutes), if the state is that no guest is associated with the dining table in use, updating the state of the dining table to be checked and cleared, and recording the absence time of the guest. At the moment, the guest may leave temporarily or leave directly, and if the guest returns after the guest leaves temporarily, the dining table state is switched into the use state; if the guest is confirmed to directly leave, the dining table state is switched into the cleaning state or the to-be-cleaned state. Of course, it is also possible that guests leave, do not order, the table does not need to be cleaned, or the table is quickly cleaned, and thus the table may also be cut into an empty table condition. Step S144, the various situations are confirmed one by one;
and S144, continuously performing m minutes (such as 2 minutes), wherein the state is the dining table to be confirmed to be cleared, the guests are associated again, and the guests are at least one guest associated with the current dining table, updating the state of the dining table to be in use, and recording the return time of the guests. At this time, the guest is indicated to temporarily leave and return to the meal. Here, a face recognition technology or a human body recognition technology (such as a deep learning target detection method) is required to be adopted to identify the guest so as to confirm whether the guest is the guest related to the dining table; the above identification techniques are conventional and are not described herein;
or, c minutes (e.g., 1 minute) continues, the table whose status is to be confirmed as being cleared is identified as being empty, the status of the table is updated to be empty, and the clearing end time is updated. At the moment, the dining table does not need to be cleaned, or the dining table is quickly cleaned;
or if the time exceeds d minutes (for example, 20 minutes), the state is the dining table to be confirmed to be cleaned, no guest is associated any more, and no empty table is identified, the state of the dining table is updated to be cleaned, and the time to be cleaned is updated. At the moment, the guest indicates that the guest has consumed the leave seat, but not temporarily leaves midway, and the dining table is in a state to be cleaned;
or continuously for e minutes (such as 1 minute), the state is the dining table to be confirmed to be cleaned, the state of the dining table is updated to be cleaned when the staff is detected to be associated and the recognized staff acts to clean the dining table, and the cleaning time is updated. In order to increase the identification accuracy, a behavior identification method (which is the prior art) is adopted in the step to specifically identify whether the staff is cleaning the table. Such as forward bending of the body of the employee, hand contact with the table, etc.; certainly, the staff can be identified to clean the dining table according to the fact whether the height of the rectangular frame of the staff is reduced (for example, by 0.4 time), whether bowls and chopsticks of the dining table are reduced, and the like;
and S145, exceeding f minutes (60 minutes), the table is to be cleaned or cleaned, and if the table is not identified to be an empty table, the state of the table is reset to the initial state. In this step, regardless of unknown abnormal conditions or whether the table is actually not cleaned, the current table identification process is forcibly ended, and the table is reset to the initial state (empty table). At the moment, the dining table can record non-compliant behaviors which are not cleared by the staff in time;
or, g minutes (2 minutes) continuously, the state is the table to be cleaned or cleaned, and if it is detected that a new guest is associated therewith and the guest has gone to the table, the state of the table is reset to the initial state. At the moment, a new guest already sits in the dining table, and the dining table at the moment is completely switched from an empty table to an empty table, namely consumed once;
or, continuously for h minutes (1 minute), the state is the table to be cleaned or being cleaned, it is not recognized as an empty table, and no human body is associated, the state of the table is updated to an empty table to be confirmed. At this time, the table in this state is not completed with all the cleaning processes, and the staff member will continue to clean the table (e.g. wipe the table). The identification result can be one or more conditions of oil stains identified on the dining table, bowls and chopsticks for standby use not placed on the dining table or table cloth not laid on the dining table;
s146, continuously performing j minutes (1 minute), detecting that the table is empty to be confirmed, and if the staff is associated with the table and the recognized staff acts as cleaning the table, updating the state of the table to be cleaning and the cleaning time;
s147, continuously carrying out k minutes (2 minutes), wherein the state is the dining table in clearing, and if the dining table is detected to be associated with a new guest and the guest is on the table, the state of the dining table is reset to be the initial state;
or, if j minutes (5 minutes) is exceeded, the table is in a clean state and no employee is detected as being associated with the table, the state of the table is updated to an empty table. At this point, it is confirmed that the table is cleaned, rather than the staff temporarily leaving during the cleaning process.
It should be noted that the durations a to k and m may be obtained from statistics on the restaurant's historical consumption data, or determined from the experience of the store owner or an industry expert. A condensed sketch of the state machine of steps S141 to S147, together with the seated test of S142, is given below.
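The sketch below is illustrative, not the patent's implementation: the state names paraphrase the text, the Observation fields are assumptions introduced here, and the per-transition continuity requirements (the durations a to k and m) are deliberately left out, each branch being understood to fire only after its observation has persisted for the configured number of minutes.

```python
# Condensed sketch of the table state machine (S141-S147).
from dataclasses import dataclass

EMPTY, IN_USE, CLEAR_TBC, TO_CLEAN, CLEANING, EMPTY_TBC = (
    "empty", "in_use", "clearing_to_be_confirmed",
    "to_be_cleaned", "being_cleaned", "empty_to_be_confirmed")

def is_seated(box, standing_height, ratio=0.5):
    """S142 seated test: a seated person's frame is markedly shorter than
    the same person's standing height (ratio below ~0.5 in the text)."""
    height = box[3] - box[1]  # current frame height of the guest
    return height < ratio * standing_height

@dataclass
class Observation:
    guest_seated: bool       # a guest is seated at the table
    returning_guest: bool    # a previously associated guest came back
    any_guest: bool          # some guest is associated with the table
    employee_cleaning: bool  # associated employee recognized as cleaning
    looks_empty: bool        # empty-table classifier says the table is empty
    no_person: bool          # no human body associated at all

def next_state(state, obs):
    if state == EMPTY and obs.guest_seated:              # S142 (a min)
        return IN_USE
    if state == IN_USE and not obs.any_guest:            # S143 (b min)
        return CLEAR_TBC
    if state == CLEAR_TBC:
        if obs.returning_guest:                          # S144 (m min)
            return IN_USE
        if obs.looks_empty:                              # S144 (c min)
            return EMPTY
        if obs.employee_cleaning:                        # S144 (e min)
            return CLEANING
        return TO_CLEAN                                  # S144 (> d min)
    if state in (TO_CLEAN, CLEANING):
        if obs.guest_seated:                             # S145/S147: reset
            return EMPTY
        if obs.no_person and not obs.looks_empty:        # S145 (h min)
            return EMPTY_TBC
    if state == EMPTY_TBC and obs.employee_cleaning:     # S146 (j min)
        return CLEANING
    if state == CLEANING and not obs.employee_cleaning:  # S147 (> j min)
        return EMPTY
    return state  # the f-minute forced reset of S145 is omitted here
```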
S15, according to the judgment results, counting the table turnover rate of the dining tables and identifying the non-compliant behavior of tables not being cleaned in time. Specifically, each switch of a dining table from any state to the empty state counts as one use, and the table turnover rate is calculated as:
table turnover rate = total number of uses of all tables / total number of tables - 1 (2);
Further, within each recognized use of a dining table, the time at which the table state switches from in use to clearing to be confirmed represents the guest departure time, and the time at which it switches to empty is the cleaned time; the cleaning duration is then:
cleaning duration = cleaned time - guest departure time (3);
once the cleaning duration exceeds a preset value (e.g., 5 minutes), the non-compliant behavior of staff failing to clean the dining table in time has occurred. A short numeric sketch of formulas (2) and (3) is given below;
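As a short numeric sketch of formulas (2) and (3), with illustrative function names and the 5-minute preset from the example above:

```python
# Formula (2): each switch of a table into the empty state counts as one use.
def turnover_rate(total_uses, total_tables):
    return total_uses / total_tables - 1

# Formula (3): times in minutes from a common reference point.
def cleaned_late(cleaned_time, departure_time, preset=5.0):
    return (cleaned_time - departure_time) > preset

assert turnover_rate(25, 10) == 1.5   # 10 tables, 25 uses in total
assert cleaned_late(100.0, 92.0)      # 8 minutes to clean: non-compliant
```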
s16, the result is output, and the process returns to the step S12 to perform the next recognition.
Further, before step S11 is executed, the method further comprises: collecting images of guests and employees, identifying employee clothing and guest clothing with a deep learning classification method, and training the deep learning classification method according to the recognition results; and collecting images of dining tables in the empty state and the non-empty states (in use, being cleaned, and to be cleaned), identifying the empty-table state with a deep learning classification method, and training the deep learning classification method according to the recognition results. The invention adopts a deep learning classification method and a deep learning target detection method, which have good autonomous learning ability and can improve themselves automatically from the relevant data, effectively increasing the accuracy of the method's recognition results.
In summary, the dining table state identification method based on image identification of this embodiment uses image recognition to detect whether guests and employees are present around a dining table and whether the table is empty, combines this with time-sequence information to comprehensively judge the table's in-use, to-be-cleaned, and being-cleaned states, and finally counts the number of uses of each table and its cleaning duration from the changes in table state, thereby calculating the turnover rate and detecting the non-compliant behavior of staff failing to clean tables in time. The turnover rate computed by the method is highly accurate, and prompt post-meal cleaning can be supervised effectively, which improves the dining experience of restaurant customers and greatly reduces the labor cost a restaurant would otherwise incur to raise turnover and supervise cleaning.
Embodiment two:
the invention also provides a dining table state identification system based on image identification, which comprises the following components: the device comprises a configuration module, a storage medium, a processor, a memory and a display module, wherein the processor is connected with the configuration module, the analysis module, the storage medium and the display module. Further, the configuration module is used for configuring and editing the protocol file, wherein the protocol file is binary format data; the analysis module is used for analyzing the protocol file into JSON format data; a storage medium having a computer program stored thereon, wherein the computer program is executed to implement the method for recognizing the dining table state based on image recognition according to the first embodiment; a processor for executing the computer program stored in the storage medium to make the processor execute the table state identification method based on image identification according to the first embodiment; the memory is used for storing the picture data called by the processor; and the display module is used for displaying the output result of the dining table state identification method based on the image identification in the embodiment one.
It should be noted that, as those skilled in the art will understand, all or part of the features/steps of the above method embodiment may be implemented as a method, a data processing system, or a computer program, and may be realized entirely in software or by a combination of hardware and software. The computer program may be stored in one or more computer-readable storage media and executed by a processor to perform the steps of the image-recognition-based dining table state recognition method. The aforementioned storage medium and memory capable of storing program code include: a hard disk, a solid-state drive, static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), an optical storage device, a magnetic storage device, flash memory, a magnetic or optical disk, and combinations thereof, and may be realized by any type of volatile or non-volatile storage device or a combination thereof. The display module includes a computer terminal, a display screen, or a mobile phone terminal.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A dining table state identification method based on image identification, characterized by comprising the following steps:
S11, marking the area of each dining table in each camera view;
S12, acquiring an image from each camera, detecting human bodies, and classifying each detected human body;
S13, associating the detected human bodies with dining tables, and judging whether each dining table is empty;
S14, judging the state of each dining table associated with a human body;
S15, according to the judgment results, counting the table turnover rate of the dining tables and identifying the non-compliant behavior of dining tables not being cleaned in time;
S16, outputting the results and returning to step S12 for the next round of recognition.
2. The dining table state identification method based on image identification of claim 1, wherein step S12 comprises the following steps:
S121, performing human body detection on the image acquired from the camera with a human body recognition method to obtain a rectangular frame for each human body;
and S122, cropping a small picture from each human body's rectangular frame and identifying the clothing with a dressing recognition method, so as to classify each human body as a guest or an employee.
3. The dining table state identification method based on image identification of claim 2, wherein the human body recognition method is a deep learning target detection method;
and the dressing recognition method is a deep learning classification method.
4. The dining table state identification method based on image identification of claim 2, wherein step S13 comprises the following steps:
S131, calculating the IoU value between each dining table area and each human body rectangular frame;
S132, judging whether the IoU value is larger than a set threshold; if yes, executing step S133; otherwise, executing step S134;
S133, associating the human body with the dining table and identifying the category of the associated human body, then executing step S135;
S134, leaving the human body unassociated with any dining table;
and S135, judging whether the dining table is empty.
5. The dining table state identification method based on image identification of claim 4, wherein the IoU value is calculated as:
IoU = area0 / (area1 + area2 - area0);
where area0 is the area of the overlap between the collection area and the human body's rectangular frame, area1 is the area of the collection area, and area2 is the area of the rectangular frame of the human body.
6. The dining table state identification method based on image identification of claim 4, wherein step S14 comprises the following steps:
S141, initializing the state of each dining table to empty; the states of a dining table further include in use and cleaning, where cleaning includes to be cleaned;
S142, if, for a continuous minutes, a guest is detected as associated with the dining table and is confirmed to be seated, updating the initial state of the dining table to in use;
S143, if, for b continuous minutes, a dining table in the in-use state is no longer associated with any guest, updating the state of the dining table to clearing to be confirmed and recording the guest departure time;
S144, if, for m continuous minutes, a dining table in the clearing-to-be-confirmed state is associated with a guest again, and the guest is at least one of the guests previously associated with this dining table, updating the state of the dining table to in use and recording the guest return time;
or, if, for c continuous minutes, a dining table in the clearing-to-be-confirmed state is recognized as an empty table, updating the state of the dining table to empty and updating the clearing end time;
or, if more than d minutes pass with the dining table in the clearing-to-be-confirmed state, no guest associated, and the dining table not recognized as an empty table, updating the state of the dining table to to be cleaned and updating the to-be-cleaned time;
or, if, for e continuous minutes, a dining table in the clearing-to-be-confirmed state is detected as associated with an employee and the recognized employee behavior is cleaning the dining table, updating the state of the dining table to being cleaned and updating the cleaning time;
S145, if more than f minutes pass with the dining table in the to-be-cleaned or being-cleaned state and the dining table is not recognized as an empty table, resetting the state of the dining table to the initial state;
or, if, for g continuous minutes, a dining table in the to-be-cleaned or being-cleaned state is detected as associated with a new guest who is seated at the table, resetting the state of the dining table to the initial state;
or, if, for h continuous minutes, a dining table in the to-be-cleaned or being-cleaned state is not recognized as an empty table and is not associated with any human body, updating the state of the dining table to empty to be confirmed;
S146, if, for j continuous minutes, a dining table in the empty-to-be-confirmed state is detected as associated with an employee and the recognized employee behavior is cleaning the dining table, updating the state of the dining table to being cleaned and updating the cleaning time;
S147, if, for k continuous minutes, a dining table in the being-cleaned state is detected as associated with a new guest who is seated at the table, resetting the state of the dining table to the initial state;
or, if more than j minutes pass with the dining table in the being-cleaned state and no employee is detected as associated with the dining table, updating the state of the dining table to empty.
7. The dining table state identification method based on image identification of claim 6, wherein each switch of a dining table from any state to the empty state counts as one use;
and the table turnover rate = total number of uses of all dining tables / total number of dining tables - 1.
8. The dining table state identification method based on image identification of claim 6, wherein the time at which the state of a dining table switches from in use to clearing to be confirmed or to be cleaned represents the guest departure time, and the time at which it switches to empty is the cleaned time, so that the cleaning duration = cleaned time - guest departure time;
and when the cleaning duration exceeds a preset value, it indicates that the staff did not clean the dining table in time.
9. The dining table state identification method based on image identification of claim 6, wherein before step S11 is executed, the method further comprises: collecting images of guests and employees, identifying employee clothing and guest clothing with the deep learning classification method, and training the deep learning classification method according to the recognition results;
and collecting images of dining tables in the empty and non-empty states, identifying the empty-table state of the dining table with the deep learning classification method, and training the deep learning classification method according to the recognition results.
10. A dining table state recognition system based on image recognition, characterized by comprising:
a configuration module for configuring and editing a protocol file, the protocol file being binary format data;
an analysis module for parsing the protocol file into JSON format data;
a storage medium storing a computer program that, when executed, implements the dining table state identification method based on image identification of any one of claims 1 to 9;
a processor for executing the computer program stored in the storage medium so as to carry out the dining table state identification method based on image identification of any one of claims 1 to 9;
a memory for storing images called by the processor;
and a display module for displaying the output results of the dining table state identification method based on image identification of any one of claims 1 to 9;
wherein the processor is connected with the configuration module, the analysis module, the storage medium, and the display module.
CN202210048485.6A (filed 2022-01-17; priority date 2022-01-17) Dining table state identification method and system based on image identification. Active; granted as CN114067369B.

Priority Applications (1)

CN202210048485.6A (granted as CN114067369B); priority date 2022-01-17; filing date 2022-01-17; title: Dining table state identification method and system based on image identification

Publications (2)

CN114067369A, published 2022-02-18 (this publication)
CN114067369B, published 2022-05-24 (granted publication)

Family

ID=80231133

Family Applications (1)

CN202210048485.6A: Dining table state identification method and system based on image identification (Active; granted as CN114067369B)

Country Status (1)

CN: CN114067369B


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512970A (en) * 2016-01-05 2016-04-20 江苏木盟智能科技有限公司 Dining table service system
US20190213438A1 (en) * 2018-01-05 2019-07-11 Irobot Corporation Mobile Cleaning Robot Artificial Intelligence for Situational Awareness
CN110211000A (en) * 2018-02-28 2019-09-06 阿里巴巴集团控股有限公司 Table state information processing method, apparatus and system
CN111191804A (en) * 2018-11-15 2020-05-22 北京京东尚科信息技术有限公司 Method, system, device and storage medium for generating restaurant service task information
WO2020103647A1 (en) * 2018-11-19 2020-05-28 腾讯科技(深圳)有限公司 Object key point positioning method and apparatus, image processing method and apparatus, and storage medium
CN109858949A (en) * 2018-12-26 2019-06-07 秒针信息技术有限公司 A kind of customer satisfaction appraisal procedure and assessment system based on monitoring camera
CN111476271A (en) * 2020-03-10 2020-07-31 杭州易现先进科技有限公司 Icon identification method, device, system, computer equipment and storage medium
CN113591826A (en) * 2021-10-08 2021-11-02 长沙鹏阳信息技术有限公司 Dining table cleaning intelligent reminding method based on computer vision

Also Published As

CN114067369B, published 2022-05-24

Similar Documents

Publication Publication Date Title
Anderson et al. Recognizing falls from silhouettes
JP4304337B2 (en) Interface device
TW200820099A (en) Target moving object tracking device
US9047505B2 (en) Collating device
JP6573311B2 (en) Face recognition system, face recognition server, and face recognition method
US20040036712A1 (en) Queue management system and method
CN114067369B (en) Dining table state identification method and system based on image identification
CN116051315B (en) Intelligent hotel management system
CN111281274A (en) Visual floor sweeping method and system
CN113591826B (en) Dining table cleaning intelligent reminding method based on computer vision
CN111027385A (en) Clustering visitor counting method, system, equipment and computer readable storage medium
CN116720899B (en) Super-intelligent business monitoring management method, device, electronic equipment and medium
CN115761639B (en) Seat information intelligent analysis and recommendation method
JP2021040604A (en) Tick image processing device, tick image processing method, program and tick image processing system
CN113537073A (en) Method and system for accurately processing special events in business hall
CN114792368A (en) Method and system for intelligently judging store compliance
JP2003281157A (en) Person retrieval system, person tracing system, person retrieval method and person tracing method
CN111062269B (en) User state identification method and device, storage medium and air conditioner
CN111738681A (en) Intelligent disinfection behavior judgment system and method based on deep learning and intelligent socket
JP6920944B2 (en) Object detector
CN110796013A (en) Resident population information collection method and system
JP7413278B2 (en) Information processing method, information processing device, and program
JP2019128799A (en) Target person detection device
EP4310760A1 (en) Display control program, display control method, and information processing apparatus
CN113743339B (en) Indoor falling detection method and system based on scene recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant