WO2023034521A1 - Kitchen system with food preparation station - Google Patents

Kitchen system with food preparation station

Info

Publication number
WO2023034521A1
Authority
WO
WIPO (PCT)
Prior art keywords
food
pizza
database
ingredient
food preparation
Application number
PCT/US2022/042372
Other languages
French (fr)
Inventor
Jae Won Lim
Beom-Jin Lee
Original Assignee
GOPIZZA Inc.
Kim, Mincheol
Priority claimed from US17/464,430 (US11544925B1)
Priority claimed from US17/464,405 (US20230063320A1)
Application filed by GOPIZZA Inc. and Kim, Mincheol
Publication of WO2023034521A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0631: Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06316: Sequencing of tasks or work
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/12: Hotels or restaurants

Definitions

  • Restaurants use food preparation stations in their kitchens.
  • a typical food preparation station has food pans containing food ingredients.
  • Restaurant workers prepare a dish using ingredients from the food pans.
  • a change of ingredient location may confuse restaurant workers.
  • One aspect of the present disclosure provides a method for use in food preparation.
  • the method may comprise one or more of the steps of: providing a food preparation table, a pan array located next to the food preparation table, food pans arranged on the pan array, indicating lights, and at least one camera; providing at least one database storing data relating to predefined zones of the pan array and the indicating lights, wherein a predefined zone is preassigned to at least one indicating light and further linked thereto such that the at least one database is to be referred to for linkage between a predefined zone and at least one indicating light preassigned thereto; capturing, using the at least one camera, images of the pan array located next to the food preparation table such that the captured images feature food pans arranged on the pan array and ingredients contained therein; processing the captured images to determine a location on the pan array of an ingredient featured on at least part of the captured images such that the ingredient is determined to be in one of the predefined zones of the array; updating the at least one database to link the ingredient to at least one of the indicating lights that is preassigned to the determined one of the predefined zones; repeating capturing images of the pan array, processing the captured images and updating the at least one database; and generating guidances for a person working at the food preparation table with reference to the at least one database.
  • at a first time, a first one of the ingredients may be located in a first one of the predefined zones of the pan array.
  • at a second time, the first ingredient may be located in a second one of the predefined zones of the pan array.
  • at a third time between the first time and the second time, the first ingredient may be moved from the first predefined zone to the second predefined zone, such that at the first time the first ingredient may be linked to a first indicating light preassigned to the first predefined zone on the at least one database, and further such that at the second time the first ingredient may be linked to a second indicating light preassigned to the second predefined zone on the at least one database.
  • the method may further comprise referring to the at least one database to generate a first guidance at the first time and a second guidance at the second time.
  • the first guidance may indicate the first ingredient using the first indicating light as the first ingredient is located in the first predefined zone and is linked to the first indicating light on the at least one database at the first time
  • the second guidance may indicate the first ingredient using the second indicating light as the first ingredient is located in the second predefined zone and is linked to the second indicating light on the at least one database at the second time.
  • processing the captured images may comprise identifying at least part of the ingredients based on color information contained in the at least part of the captured images.
  • the at least one camera may further capture images of the food preparation table and food being prepared thereon, and the method may further comprise determining completion of a food preparation step based on the captured images of the food being prepared on the food preparation table and further based on a predetermined completion criterion for the food preparation step.
  • the first guidance may be for a step to prepare a first food item
  • the second guidance may be for a step to prepare another food item, for a later step to prepare the first food item, or for the same step to prepare the first food item that is run at a later time.
  • the at least one database may further store a first recipe comprising a sauce step for spreading sauce on a pizza dough placed on the food preparation table
  • the method may further comprise one or more of the steps of: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the sauce step is completed based on at least part of the captured images of pizza preparation.
  • determining completion of the sauce step may comprise one or more of the steps of: processing an image of pizza preparation captured during the sauce step to identify a first group of pixels, each of which is located within an outer boundary of the pizza dough, obtaining a 2-dimensional area of the pizza dough based on a count of pixels of the first group, processing the image of pizza preparation or its modified version to identify a second group of pixels, each of which belongs to a sauce area where the sauce is applied over the pizza dough, obtaining a 2-dimensional size of the sauce area based on a count of pixels of the second group, and computing a percentage of the 2-dimensional size of the sauce area with reference to the 2-dimensional area of the pizza dough.
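
As a rough illustration of the pixel-count computation just described, the following is a minimal sketch, assuming binary masks for the dough and the sauce have already been produced by an upstream segmentation step (the mask inputs and function name are hypothetical):

```python
# Minimal sketch of the sauce-coverage computation described above.
# Assumes boolean masks from a prior segmentation step (hypothetical inputs).
import numpy as np

def sauce_coverage_percent(dough_mask: np.ndarray, sauce_mask: np.ndarray) -> float:
    """Return the sauced share of the dough as a percentage.

    dough_mask: True for pixels inside the pizza dough's outer boundary (first group).
    sauce_mask: True for pixels where sauce is applied over the dough (second group).
    """
    dough_pixels = int(np.count_nonzero(dough_mask))
    sauce_pixels = int(np.count_nonzero(sauce_mask & dough_mask))
    if dough_pixels == 0:
        return 0.0
    return 100.0 * sauce_pixels / dough_pixels

# Example: the sauce step of the example recipe could be deemed complete
# once coverage reaches 75% ("place sauce on 3/4 of dough").
# done = sauce_coverage_percent(dough_mask, sauce_mask) >= 75.0
```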
  • the at least one database may further store a first recipe comprising a cheese step for adding cheese over a pizza dough placed on the food preparation table
  • the method may further comprise one or more of the steps of: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the cheese step is completed based on at least part of the captured images of pizza preparation.
  • determining completion of the cheese step may comprise one or more of the steps of: overlaying a grid pattern on a 2-dimensional area of the pizza dough in an image of pizza preparation captured during the cheese step, for each grid unit of the grid pattern, determining if the cheese occupies the grid unit based on color information of the grid unit, and counting the number of grid units occupied by the cheese.
  • a representative color may be computed, and the representative color may be compared against a predetermined color value to determine if the cheese occupies the grid unit.
  • the representative color may be an average of pixel color values of pixels within each grid unit.
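
As a rough illustration of the grid-based cheese check, here is a minimal sketch, assuming an RGB image cropped to the dough area; the reference cheese color, tolerance, and grid size are illustrative assumptions, not values from the disclosure:

```python
# Minimal sketch of the grid-unit cheese check described above.
import numpy as np

CHEESE_RGB = np.array([235.0, 225.0, 190.0])  # assumed reference cheese color
COLOR_TOLERANCE = 60.0                        # assumed color-distance threshold

def count_cheese_units(image: np.ndarray, grid: int = 16) -> int:
    """image: HxWx3 RGB array cropped to the pizza dough's 2-D area."""
    h, w = image.shape[:2]
    occupied = 0
    for y in range(0, h - grid + 1, grid):
        for x in range(0, w - grid + 1, grid):
            unit = image[y:y + grid, x:x + grid].reshape(-1, 3)
            representative = unit.mean(axis=0)  # average pixel color of the grid unit
            if np.linalg.norm(representative - CHEESE_RGB) < COLOR_TOLERANCE:
                occupied += 1  # grid unit deemed occupied by cheese
    return occupied
```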
  • the at least one database may further store a first recipe comprising a pepperoni step for adding pepperoni slices over a pizza dough placed on the food preparation table
  • the method may further comprise one or more of the steps of: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the pepperoni step is completed based on at least part of the captured images of pizza preparation.
  • determining completion of the pepperoni step may comprise one or more of the steps of: processing an image of pizza preparation captured during the pepperoni step to compute a count of pepperoni slices placed over the pizza dough, and determining completion of the pepperoni step when the computed count of pepperoni slices is equal to or greater than a predetermined number.
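
One way to realize the pepperoni count from a camera image is to count connected components in a color mask. The sketch below is a hedged example, assuming a binary pepperoni-color mask is available; the minimum slice area is an assumed filter against noise:

```python
# Minimal sketch of pepperoni counting via connected components (OpenCV).
import cv2
import numpy as np

MIN_SLICE_AREA = 400   # assumed minimum pixel area of one pepperoni slice
REQUIRED_SLICES = 12   # the predetermined number from the example recipe

def pepperoni_step_done(pepperoni_mask: np.ndarray) -> bool:
    """pepperoni_mask: uint8 image, 255 where pepperoni color was detected."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(pepperoni_mask, connectivity=8)
    # Label 0 is the background; count sufficiently large foreground components.
    slices = sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= MIN_SLICE_AREA)
    return slices >= REQUIRED_SLICES
```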
  • the food preparation system may comprise one or more of: a food preparation table; a pan array located next to the food preparation table; food pans arranged on the pan array; indicating lights configured to indicate predefined zones of the pan array; and at least one camera configured to capture images of the pan array.
  • the food preparation system may further comprise one or more of: at least one database storing data relating to predefined zones of the pan array and the indicating lights, wherein a predefined zone is preassigned to at least one indicating light and further linked thereto such that the at least one database is to be referred to for linkage between a predefined zone and at least one indicating light preassigned thereto; and a computing system configured to operate the food preparation system.
  • the computing system may be configured to cause the food preparation system to perform one or more of the following actions: capturing, using the at least one camera, images of the pan array located next to the food preparation table such that the captured images feature food pans arranged on the pan array and ingredients contained therein; processing the captured images to determine a location on the pan array of an ingredient featured on at least part of the captured images such that the ingredient is determined to be in one of the predefined zones of the array; updating the at least one database to link the ingredient to at least one of the indicating lights that is preassigned to the determined one of predefined zones on the at least one database; repeating capturing images of the pan array, processing the captured images and updating the at least one database; and generating guidances for a person working at the food preparation table with reference to the at least one database.
  • the computing system may be configured to cause the food preparation system to update the at least one database such that at the first time the first ingredient is linked to a first indicating light preassigned to the first predefined zone on the at least one database, and at the second time the first ingredient is linked to a second indicating light preassigned to the second predefined zone on the at least one database, and further such that a first one of the guidances indicates the first ingredient using the first indicating light as the first ingredient is located in the first predefined zone and is linked to the first indicating light on the at least one database at the first time, and a second one of the guidances indicates the first ingredient using the second indicating light as the first ingredient is located in the second predefined zone and is linked to the second indicating light on the at least one database at the second time.
  • the computing system may be further configured to determine color information contained in the at least part of the captured images and to identify at least part of the ingredients based on the color information.
  • the at least one camera may be further configured to capture images of the food preparation table and food being prepared thereon, and the computing system may be configured to determine completion of a food preparation step based on the captured images of the food being prepared on the food preparation table and further based on a predetermined completion criterion for the food preparation step.
  • the at least one database may store a recipe comprising a sauce step for spreading sauce on a pizza dough placed on the food preparation table
  • the computing system may be configured to perform one or more of the following actions: processing an image of pizza preparation captured during the sauce step to identify a first group of pixels, each of which is located within an outer boundary of the pizza dough; processing the image of pizza preparation or its modified version to identify a second group of pixels, each of which belongs to a sauce area where the sauce is applied over the pizza dough; and determining completion of the sauce step based on a count of pixels of the first group and further based on a count of pixels of the second group.
  • the at least one database may store a recipe comprising a cheese step for adding cheese over a pizza dough
  • the computing system may be configured to perform one or more of the following actions: overlaying, on an image of pizza preparation captured during the cheese step, a grid pattern comprising a plurality of unit grids; identifying a first group of unit grids, each of which is located within an outer boundary of the pizza dough; identifying a second group of unit grids, each of which belongs to a cheese area where cheese is applied over the pizza dough; and determine completion of the cheese step based on a count of unit grids of the first group and further based on a count of unit grids of the second group.
  • the at least one database may store a recipe comprising a pepperoni step for placing pepperoni slices over a pizza dough
  • the computing system may be configured to perform one or more of the following actions: processing an image of pizza preparation captured during the pepperoni step to compute a count of pepperoni slices placed over the pizza dough; and determining completion of the pepperoni step when the computed count of pepperoni slices is equal to or greater than a predetermined number.
  • images of the pizza preparation may comprise at least one image containing the person’s hand overlaying at least part of a pizza dough, and the at least one image containing the person’s hand may not be used to determine completion of the sauce step, the cheese step, and the pepperoni step.
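
Skipping occluded frames can be as simple as filtering the image stream before the completion checks run. The sketch below assumes a hand detector exists; `detect_hand` is a hypothetical stand-in (e.g., a machine-trained segmentation model):

```python
# Minimal sketch: exclude frames where a hand overlays the pizza.
from typing import Callable, Iterable, Iterator
import numpy as np

def frames_for_completion_check(
    frames: Iterable[np.ndarray],
    detect_hand: Callable[[np.ndarray], bool],  # hypothetical hand detector
) -> Iterator[np.ndarray]:
    """Yield only frames without a hand over the work area."""
    for frame in frames:
        if not detect_hand(frame):  # occluded frames are not used for the checks
            yield frame
```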
  • Figure 1 is a flow chart for preparing a pizza according to an implementation.
  • Figure 2A illustrates a kitchen system according to an implementation.
  • Figure 2B is a side view of the station of Figure 2A.
  • Figure 3 illustrates a food pan array viewed from the top according to an implementation.
  • Figure 4A is a photograph of an example food preparation station according to an implementation.
  • Figure 4B is a photograph showing a food pan array of the example station of Figure 4A.
  • Figure 4C shows a camera system of the example station of Figure 4A.
  • Figure 4D shows a light indicator of the example station of Figure 4A.
  • Figure 5 is a flow chart of overall process of providing food preparation guide to a person according to an implementation.
  • Figure 6A illustrates data of a recipe according to an implementation.
  • Figure 6B illustrates data of food preparation history according to an implementation.
  • Figure 6C illustrates data of a person according to an implementation.
  • Figure 7 is a flowchart of determining and storing locations of food ingredients according to an implementation.
  • Figure 8 illustrates data of food ingredients and their locations according to an implementation.
  • Figure 9 is a flowchart of providing a step-by-step food preparation guidance according to an implementation.
  • Figure 10 is a flowchart of providing guidance for an individual step of a recipe according to an implementation.
  • Figure 11 is a flowchart of determining progress of a recipe step according to an implementation.
  • Figure 12A is an example screen for a dough preparation step according to an implementation.
  • Figure 12B is a photograph of a pizza dough being prepared according to an implementation.
  • Figure 13A illustrates a screen for a sauce adding step according to an implementation.
  • Figure 13B is a photograph of a sauce adding step according to an implementation.
  • Figure 14A is an example screen for a cheese adding step according to an implementation.
  • Figure 14B is a photograph of a cheese adding step according to an implementation.
  • Figure 14C is another photograph of a cheese adding step according to an implementation.
  • Figure 15A is an example screen for a topping adding step according to an implementation.
  • Figure 15B is a photograph of a topping adding step according to an implementation.
  • Figure 16 illustrates a screen for a topping adding step according to an implementation.
  • Figure 17 illustrates a screen notifying a completed food preparation according to an implementation.
  • Figure 18 is an example screen to provide performance feedback according to an implementation.
  • Figure 19 illustrates one or more computing systems for use with one or more implementations.
  • a typical food preparation station has a food preparation table and food pans containing food ingredients.
  • Restaurant workers prepare a food on the food preparation table using ingredients from the food pans.
  • the station may be provided with indicating lights for indicating food pans. To help workers locate ingredients quickly, the station may turn on an indicating light to indicate a food pan containing a particular ingredient to be used at a particular step of the instructions. Sometimes, however, the food pan indicated with the indicating light may contain another ingredient, which may confuse workers.

Tracking Changes of Ingredient Location
  • An enhanced food preparation station may be associated with a system that tracks location changes of the food pans or ingredients contained in the food pans.
  • the system may maintain the current location of each ingredient contained in each food pan. The system can then use the accurate location of each ingredient and turn on the indicating light(s) for indicating the correct ingredient to be used at each step of the instructions.
  • the configuration and operation of an enhanced food preparation station will be described with reference to an example recipe.
  • FIG. 1 illustrates a flow chart for preparing a pepperoni pizza on a food preparation station before the pizza is baked in a pizza oven or furnace.
  • Step 1 is preparing a dough, which is followed by Step 2 for adding sauce on the dough.
  • at Step 3, cheese is added over the sauce, which is followed by Step 4 for adding pepperoni over the cheese.
  • a flow of preparing a pizza includes steps of sequentially stacking a food ingredient over a pizza dough. While a pepperoni pizza recipe is discussed herein, the station can guide a person to prepare different pizzas and various dishes other than pizzas.
  • Figure 2A illustrates a kitchen system according to an implementation.
  • Figure 2B illustrates a side view of the station of Figure 2A.
  • Figure 3 illustrates a food pan array viewed from the top.
  • the food preparation station 100 of Figure 2A includes a food preparation table 110 and a food pan array 120.
  • the station 100 further includes a display 130, light indicators 140, at least one camera 150, a computing system 160, a database 170, and an ID card reader 180.
  • Figure 4A to Figure 4D are photographs of an example food preparation station 4100.
  • the food preparation table 110 provides a working surface on which food is prepared.
  • Figure 2B shows a person 210 preparing a pizza 220 on the table 110.
  • the table 110 is adjacent to the food pan array 120 such that the person 210 can pick up food ingredients from the array 120 without having to step toward the array 120.
  • the station of Figure 4A has a food preparation table 4120 with two pizzas 4121, 4122 being prepared.
  • the table 4120 is sized such that two persons can work at the same time.
  • a food pan array is for temporarily storing food ingredients.
  • the food pan array 120 of Figure 3 includes a frame 310 and a plurality of food pans 320 placed on the frame 310.
  • Figure 4B shows another food pan array 4110.
  • the food pans 320 are arranged in 6 columns and 2 rows.
  • a food pan array may have a different arrangement from the examples.
  • each one of the food pans 320 is a container for storing one or more food ingredients.
  • the pans may be of the same size or different sizes.
  • the pans may be of the same shape or different shapes.
  • a food pan may be used with or without a lid or cover.
  • Figure 4B shows example food pans 4420 containing ingredients to prepare pizzas.
  • the frame may have a rail structure on which one or more food pans are placed.
  • the food pan array 4110 has two elongated bars (rails) 4410 on which food pans 4420 are placed in a row.
  • Each food pan has a flange to be slidably placed on the two elongated rails such that each food pan can slide along the rails 4410 and change its location in the array 4110.
  • the frame may include a plurality of recesses (or holes), each of which is to receive one or more food pans. One or more food pans can be placed into each recess.
  • a frame may have a structure different from the examples for holding one or more food pans.
  • Light indicators are used to visually indicate locations of food ingredients.
  • a light indicator 141 is provided above a pepperoni pan 321.
  • the indicator 141 may be selectively turned on to draw the person’s attention to the pan 321 and to indicate location of pepperoni while the other light indicators are not turned on.
  • the indicator 141 may be turned off while all the other light indicators are turned on.
  • the light indicators 140 are installed on the frame 310.
  • one or more lights may be attached to a pan of the array 120 such that the lights are visible to the person 210.
  • a lighting device such as a spotlight installed over the station may highlight a particular food pan to indicate ingredient contained therein.
  • Light indicators may be arranged according to a predetermined layout from which the person 210 can recognize which pan is associated with which light and will pay attention to a particular pan when an indicator is on.
  • a series of light indicators 142 are installed along an upper edge of the frame 310 and above Row 2 of food pans.
  • the light indicators 142 are sized and arranged such that each indicator is positioned right above its corresponding food pan of Row 2. From the arrangement, the person 210 recognizes that the indicator 141 is associated with the pepperoni pan 321 as it is the closest to the pan 321, and will pay attention to the pepperoni pan 321 when the indicator 141 is on.
  • a light strip 144 is installed along a lower edge of the frame 310 and under Row 1 of food pans, and a group of six lights 146 is right under the sauce pan 323. Turning on the six lights 146 would suggest the person 210 to pay attention to the sauce pan 323 rather than other pans because the sauce pan 323 is the closest pan right above the lights 146.
  • a light indicator 4140 includes two LED light strips 4141, 4142 installed above a food pan 4421.
  • the two strips 4141, 4142 may operate together or independently to draw a person’s attention to the pan 4421.
  • the lower strip 4141 may be turned on when the pan’s ingredient is needed for the left pizza 4121, and the upper strip 4142 may be turned on when the pan’s ingredient is needed for the right pizza 4122 although not limited thereto.
  • the system may have location information for each indicator and also have information of which indicator is associated with which ingredient.
  • the system stores the location of the ingredient in connection with one or more light indicators that have positional association with the ingredient as exemplified in Figure 8.
  • the system may locate one or more light indicators to turn on based on the link between the ingredient and the one or more light indicators on the database.
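
The linkage can be pictured as two small tables: a fixed zone-to-indicator assignment and an ingredient-to-zone mapping updated from camera images. The sketch below is illustrative only; the identifiers echo Figure 3 and Figure 8, but the schema itself is an assumption:

```python
# Minimal sketch of the database linkage between ingredients, zones and lights.
zone_to_lights = {                      # preassigned at installation time
    ("Row 2", "Column 2"): ["indicator_141"],
    ("Row 1", "Column 3"): ["light_group_147"],
}
ingredient_zone: dict = {}              # updated from processed camera images

def update_ingredient_zone(ingredient: str, zone: tuple) -> None:
    ingredient_zone[ingredient] = zone  # re-link after a pan location change

def lights_for(ingredient: str) -> list:
    """Return the indicator(s) to turn on for the ingredient's current zone."""
    return zone_to_lights.get(ingredient_zone.get(ingredient), [])

update_ingredient_zone("pepperoni", ("Row 2", "Column 2"))
assert lights_for("pepperoni") == ["indicator_141"]
```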
  • a light indicator may stay turned on, flash, or change its color and brightness to indicate the location of its corresponding food ingredient or to indicate a status of the food ingredient.
  • the light indicator may operate in a way different from the example to draw the person’s attention.

Display
  • the display 130 is for displaying food preparation information for the person 210 working at the station 100.
  • the display 130 may display one or more of a received order, instructions to prepare an ordered pizza, the current progress of pizza preparation, and a performance feedback after the pizza is prepared.
  • the display 130 may be placed over the food pan array 120 although not limited thereto.
  • the display 130 may be installed next to the table such that the person can see the pizza 220 and the display 130 at the same time.
  • the display 130 is facing the person 210 such that the person can read information on the display while preparing the pizza 220 on the table 110.
  • a food preparation station may use two or more displays.
  • the station 4100 has two independent displays 4131, 4132.
  • the left display 4131 may provide guidance for a first person to prepare the left pizza 4121
  • the right display 4132 may provide guidance for a second person to prepare the right pizza 4122 although not limited thereto.
  • the system includes one or more cameras 150 for capturing images of the table 110 and the array 120.
  • a camera 152 is installed for monitoring food ingredients in the pans 320
  • another camera 151 is installed for monitoring the pizza 220 being prepared on the table 110.
  • a single camera may monitor both the table 110 and the food pans 320.
  • a camera 4151 is provided for monitoring food preparation on the table 4120 and another camera 4152 is provided for monitoring food ingredients in the array 4110.
  • the camera of Figure 2B is installed over the food pan array 120 and the display 130 so as not to interfere with the person’s sight or action.
  • the two cameras 4150 are installed over the displays 4131, 4132 and the food pan array 4110.
  • a camera system may be at a location different from the examples.
  • the station 100 may include a device other than a camera to monitor food ingredients or the pizza 220 being prepared.
  • one or more thermometers may monitor temperature of each food ingredient or the pizza.
  • a weight measurement system can be used to measure the weight of the pizza 220 or a food ingredient contained in a food pan.
  • a laser scanner or a light detection and ranging (LIDAR) device may be used for measuring a thickness of a food ingredient (e.g., pizza dough, cheese over the pizza dough) or for measuring location and distribution of an ingredient on the pizza 220.
  • a device other than the examples may be used.
  • the computing system 160 is for processing information relating to operation of the station 100.
  • the computing system 160 is connected to the display 130, the light indicators 140, the camera 150, the database 170 and the ID card reader 180.
  • the computing system 160 may communicate with a device outside the station 100.
  • the computing system 160 can be outside a kitchen where the food preparation table 110 is located, and communicates with other devices of the station 100 via a communication network.
  • the computing system 160 communicates with another computing system to obtain information of an order for a pizza.
  • the computing system 160 can use computing power of another system (e.g., cloud computing).
  • An example architecture of one or more computer systems for use with one or more implementations will be described in detail with reference to Figure 19.
  • the database 170 is for storing data for providing food preparation guidance.
  • the database 170 may be one or more of a local data store of the computing system 160 and a remote data store connected to the computing system 160 via a communication network.
  • the database 170 may store a plurality of recipes that may be prepared at the station, profiles of workers, and a history of food preparation work done at the station 100. For each recipe, the database 170 may store information of necessary ingredients and locations of the ingredients. For each worker, the database 170 may store a skill level for each pizza and a history of food preparation work.
  • the database 170 may store additional data other than the example, and may not store one or more of the examples. Data stored on the database 170 will be described in detail with reference to other drawings.
  • the ID card reader 180 is for check-in and check-out of the person 210 at the station 100.
  • the station 100 may include one or more of an ID card reader, a keypad, and a face recognition system.
  • the station 100 may include a device other than the example devices.
  • Figure 4A shows two ID card readers 4181, 4182 installed on a frame of the array 4110.
  • Figure 5 is a flow chart for providing guidance to prepare food, here a pizza.
  • the system may retrieve data of a worker or person, retrieve recipe data of the ordered pizza, and provide guidance according to the retrieved recipe data.
  • the computing system 160 may locate the person’s profile on the database 170.
  • the computing system may load data of the located profile on its local memory, or may use data already stored on its local memory without newly retrieving data from the database 170.
  • An example profile of a worker will be discussed with reference to Figure 6C. This step is optional and may be omitted.
  • In response to an order for the pizza 220 or upon initiation, the computing system 160 locates the pizza’s recipe on the database 170 and loads data of the recipe on a local memory. This step S520 may precede the step of retrieving worker data S510, and the two steps S510, S520 may be performed in parallel. In an implementation, the computing system 160 uses data stored on its local memory without newly retrieving recipe data from the database 170.
  • An example recipe of a pepperoni pizza will be discussed with reference to Figure 6A.
  • the system may provide a food preparation guidance to the person 210.
  • the system may display a text instruction on the display 130, play an audio or video guide, and turn on a light indicator to notify location of a pizza ingredient.
  • the system may provide different instructions based on the person’s experience level or work history related to the current recipe. Example data for use in providing food preparation guidance will be described in detail with reference to Figure 6A to Figure 6C.
  • Figure 6A shows data of an example recipe stored on the database 170.
  • Figure 6B shows an example food preparation history.
  • Figure 6C shows example data of a worker (a station user).
  • the database stores, for each recipe, recipe name 610, step number 620, instruction 630, ingredient 640 and step completion requirement 650.
  • the database stores a log of completed orders.
  • the database stores an order number 681, a recipe name 610, a Worker ID 670, Time of Order Received 682, Time of Order Completed 683, and Preparation Speed Rating 684.
  • the database stores profiles of workers.
  • the database stores a worker ID 670, one or more recipes 610, a preparation speed rating 684, a preparation quality rating 685, and an experience level 690.
  • the database stores data in a way different from the example of Figure 6A to Figure 6C.
  • the database 170 may store additional data different from the example, and may not store one or more of the example data.
  • the recipe name 610 is for uniquely identifying each recipe on the database 170.
  • a corresponding recipe 600 can be located using the recipe’s name 610.
  • information other than the name of pizza may be used.
  • a predetermined code of a pizza may be used for delivering order information to the computing system 160, and the computing system 160 locates a corresponding recipe using the predetermined code.
  • the example recipe 600 of ‘pepperoni pizza’ has four steps in total. Each step is numbered according to its order in the recipe, from Step 1 to Step 4. A recipe may have steps fewer or more than four.
  • the database 170 may store the step order in a way different from the example of Figure 6A.
  • the database may store one or more instructions to help the person 210 during each of the recipe steps.
  • the instructions may include one or more of a text message, an audio message and a video guide predetermined for the recipe step.
  • the system may locate a first message 631 linked to Step 1 and deliver the first message to the restaurant worker.
  • the first message 631 includes a text instruction “Prepare a 10-inch dough”
  • the second message 632 includes a text instruction “Place sauce on 3/4 of dough”
  • the third message 633 includes a text instruction “Place cheese to cover 90% of sauce”
  • the fourth message 634 includes a text instruction “Place 12 slices of pepperoni”.
  • the database stores an audio or video instruction for a recipe step, and the system plays the audio/video instruction at the beginning of or during the recipe step. For example, when Step 1 is completed, the system delivers a voice instruction saying “Place sauce on 3/4 of dough” for Step 2. For another example, during Step 2, the system may repeatedly play a video guide on the display 130 showing how to apply sauce.

Selective Instructions Based on Monitoring of Food Preparation
  • the system may provide one or more instructions selectively based on monitoring of the pizza 220.
  • the system may select one or more instructions among a set of predetermined instructions based on one or more features identified from monitoring of the pizza being prepared.
  • the system may generate a new instruction that is suitable for the current status of the pizza 220. For example, during Step 2 (adding sauce), the system may request to add more sauce when it is determined the amount of added sauce is not sufficient to complete Step 2.
  • Step 1 for preparing a dough is linked to ‘dough’
  • Step 2 for adding sauce is linked to ‘sauce’.
  • no ingredient may be linked to a recipe step when the step does not involve addition or removal of an ingredient.
  • the database 170 stores one or more requirements to determine whether the step is completed.
  • the requirements may include one or more of (1) a desirable amount or count of an ingredient to be added (or removed) during the current step, (2) a size of an ingredient on the pizza 220, (3) a shape of the ingredient, (4) a desirable position of the ingredient, (5) distribution of the ingredient, (6) distance between individual pieces of the ingredient, (7) a temperature of the pizza 220, (8) a predetermined time limit of the current step, and (9) a quality or status of the ingredient (e.g., freshness, frozen, melt, chopped, deformation).
  • the system may determine that Step 4 (adding pepperoni) is completed when at least 12 slices of pepperoni (each sized greater than a predetermined minimum size) are added on the pizza 220.
  • a requirement different from the examples may be used to determine a completed step.
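
As a sketch of how such completion requirements (cf. field 650 of Figure 6A) might be stored and checked, consider the following; the record layout is an assumption for illustration, while the values mirror the example recipe:

```python
# Minimal sketch of per-step completion requirements and their check.
completion_requirements = {
    ("Pepperoni Pizza", 1): {"dough_diameter_in": 10},   # 10-inch dough
    ("Pepperoni Pizza", 2): {"sauce_coverage": 0.75},    # sauce on 3/4 of dough
    ("Pepperoni Pizza", 3): {"cheese_coverage": 0.90},   # cheese over 90% of sauce
    ("Pepperoni Pizza", 4): {"pepperoni_count": 12},     # 12 slices of pepperoni
}

def step_completed(recipe: str, step: int, measured: dict) -> bool:
    """A step is complete when every stored requirement is met or exceeded."""
    required = completion_requirements[(recipe, step)]
    return all(measured.get(key, 0) >= value for key, value in required.items())

# e.g., step_completed("Pepperoni Pizza", 4, {"pepperoni_count": 12}) -> True
```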
  • the system may evaluate the quality of pizza preparation for each recipe step. To evaluate the preparation quality, the system may consider one or more features discussed above for determining step completion. In an implementation, the system may evaluate a recipe step using one or more criteria different from the step completion requirements. For example, the system may compute a rating for Step 4 (adding pepperoni) based on the distribution of pepperoni slices on the pizza 220 even when completion of Step 4 is determined based on the count of the pepperoni slices. In an implementation, the database 170 may store one or more criteria to evaluate a preparation quality of the pizza 220 for each recipe step.
  • the database 170 may store records of orders prepared (or being prepared) at the station 100. As shown in Figure 6B, the database 170 may store, for each order, one or more of an order number 681 uniquely identifying the order, the name of the ordered pizza 610, an identification 670 of the person who prepared the ordered pizza, a time when the order was received 682, a time when the ordered pizza was completed 683, and a speed rating of the pizza preparation work 684. In an implementation, the database 170 may store data different from the examples of Figure 6B. In an implementation, the database 170 may store pizza orders prepared at a station other than the station 100.
  • the database 170 may store a worker ID that uniquely identifies a worker on the database.
  • the computing system may obtain the person’s ID (HKL) and locate data of the person on the database.
  • a worker ID is linked with orders 681 the worker prepared such that the worker’s performance or experience level may be determined based on the person’s order history.
  • the system may compute, for each completed order, a rating that represents how fast the ordered pizza had been prepared.
  • the system may compute a preparation time of the ordered pizza using the order received time 682 and the pizza completion time 683, and compare it with a predetermined desirable preparation time for the ordered pizza to determine the speed rating 684.
  • the system may measure the preparation time of the pizza from the start of the first recipe step on the table. In an implementation, the system may measure a completion time and evaluate preparation speed for each recipe step.
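
A minimal sketch of such a speed rating follows; the per-recipe target time and the letter banding are assumed examples, not the patent's actual scale:

```python
# Minimal sketch of a preparation speed rating from order timestamps.
from datetime import datetime

TARGET_MINUTES = {"Pepperoni Pizza": 5.0}  # assumed desirable preparation times

def speed_rating(recipe: str, received: datetime, completed: datetime) -> str:
    elapsed_min = (completed - received).total_seconds() / 60.0
    ratio = elapsed_min / TARGET_MINUTES[recipe]  # measured vs. target time
    if ratio <= 1.0:
        return "A"  # at or under the target time
    if ratio <= 1.5:
        return "B"
    return "C"
```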
  • the database 170 stores a profile for each worker of the station 100.
  • the database 170 may store one or more of a Worker ID 670, recipe names 610 of pizzas the worker prepared, a preparation speed rating 684 representing the worker’s pizza preparation speed, a preparation quality rating 685 representing the worker’s work quality, and an experience level 690 of the worker.
  • the database 170 may store data different from the examples.
  • the system may compute a preparation quality rating representing how properly the worker prepared pizzas in accordance with their predetermined recipes and quality standards. For example, for each recipe of pizzas a worker prepared, the system may evaluate preparation quality for each individual step of the recipe, and compute a percentage of steps satisfying a predetermined quality standard.
  • the preparation quality rating 685 can be determined in a way different from the example.
  • the database 170 may store an experience level for each recipe linked to the worker ID 670.
  • the experience level for a recipe may be determined based on one or more of the number of pizzas the worker prepared using the recipe, the worker’s preparation speed rating 684, and the worker’s preparation quality rating 685.
  • the experience level may be determined considering another factor different from the examples.
  • the system may consider the profile of the person 210 preparing the pizza 220 at the station 100.
  • the system may provide different instructions based on one or more of the person’s experience level 690 and the ratings 684, 685 about the ordered pizza (its recipe). For example, the system may provide no or limited guidance when the worker is well experienced about the ordered pizza, and may provide a more detailed guidance when the worker has a lower level of experience about the ordered pizza.
  • the kitchen system indicates the location of an ingredient within the pan array while food is being prepared. To indicate the location, the system needs the current location of the necessary ingredient and the specific light indicator associated with that location. The system performs a process to keep its data current for notifying the locations of food ingredients within the pan array.
  • Figure 7 is an example process to update locations of food ingredients.
  • the process includes capturing images of the food pan array (S710), processing captured images to determine the location of each food ingredient (S720), determining one or more indicators associated with the location of each food ingredient (S730), and storing associations between food ingredients and light indicators on the database 170 (S740).
  • At least one camera captures images of the array 120.
  • the images of the array 120 may be captured continuously, periodically or intermittently.
  • the captured images are then sent to the computing system 160 (or another computing device) for further processing.
  • the camera 150 may acquire a video of the array 120 continuously, and send at least part of the video frames to the computing system 160 or another computing device.
  • the computing system 160 may process one or more images of the array 120 to identify food pans and food ingredients.
  • the computing system 160 with appropriate software processes one or more images to locate each food pan in the images.
  • the computing system 160 may perform image segmentation of camera image(s) using a machine-trained model, and identify one or more food pans (or food ingredients) corresponding to segment(s) in the camera image(s).
  • for each food pan, the computing system may compute one or more features (e.g., color, shape, size, volume) of its contained material, and determine that a particular ingredient is contained in the pan when the computed feature(s) match the ingredient’s feature(s) stored on the database.
  • the system may identify food pans or food ingredients using an approach different from the examples.
  • the computing system 160 determines location of each food pan (or food ingredient) identified from processing of the images of the array 120.
  • the computing system 160 may process the images of the array 120 to determine a reference (e.g., a corner point, a center point) for each pan and to compute a coordinate of the pan’s reference point from a reference point of the frame 310 (e.g., a corner point, a center point).
  • the computing system 160 may store the computed coordinate on the database 170 as the location of the pan’s food ingredient.
  • the system may store the location of the pepperoni pan 321 as Row 2, Column 2 as shown in Figure 8.
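
Mapping a pan's reference-point coordinate to a Row/Column zone can be a simple quantization of the frame area. The sketch below assumes a 6x2 layout as in Figure 3; the frame dimensions are illustrative assumptions:

```python
# Minimal sketch: quantize a pan reference point into a Row/Column zone.
def pan_zone(x_mm: float, y_mm: float,
             frame_w_mm: float = 1800.0, frame_h_mm: float = 600.0,
             cols: int = 6, rows: int = 2) -> tuple:
    """Return (row, column), both 1-indexed, measured from the frame's corner."""
    col = min(int(x_mm / (frame_w_mm / cols)) + 1, cols)
    row = min(int(y_mm / (frame_h_mm / rows)) + 1, rows)
    return row, col

# e.g., a pan reference point 450 mm across and 450 mm down an assumed
# 1800 x 600 mm frame falls in Row 2, Column 2 (cf. the pepperoni pan 321).
assert pan_zone(450.0, 450.0) == (2, 2)
```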
  • the system may determine one or more indicators that will draw attention to a particular food pan based on positional relationship between the indicator and the ingredient.
  • the light indicators 142, 144 are installed on the frame according to a predetermined layout.
  • the location of the pepperoni pan 321 (Row 2, Column 2) is determined from processing of camera images.
  • the system may assign the indicator 141 to the pan 321 as no other indicator is closer to the pan 321 and no other pan is closer to the indicator 141.
  • the system may associate an indicator with a pan when they are within a predetermined distance from each other although not limited thereto.
  • the system may use a map of food pan array that defines one or more indicator assignment zones.
  • for each zone of the food pan array, the system assigns at least one light indicator based on positional association between the zone and the indicator such that turning on the indicator would draw the person’s attention to the zone.
  • the system associates or links, on the database, the ingredient (or the pan) to the indicator assigned to the zone such that the indicator may be turned on to indicate the location of the ingredient.
  • the system may store on the database 170 information of which light indicator is associated with which food ingredient. Each food ingredient may be linked to at least one light indicator on the database.
  • cheese is linked to the location of the cheese pan 324 (Row 1, Column 3) which is linked to the light group 147, and accordingly cheese is linked to the light group 147. Based on this association between cheese and the light group 147, the system may operate the light group 147 to indicate the location of cheese in the array 120.
  • the system may perform the process of Figure 7 continuously, periodically or intermittently to maintain the database 170 current and to reflect a pan location without delay.
  • the system may perform the process independent of providing step-by-step instructions for the pizza 220.
  • the system may perform the process while it is providing instructions to prepare the pizza 220 such that the system can update the database real-time in response to a pan location change during the preparation of the pizza.
  • the system may perform the process during a waiting time after completing a pizza such that a pan location change is reflected on the database before preparing another pizza.
  • the location of a food pan in the food pan array 120 may change after the food pan is refilled.
  • the person 210 refills the sauce pan 323 and the cheese pan 324 after preparing a first pizza
  • the person 210 may by mistake switch the locations of the two pans.
  • the system updates the database such that the sauce pan 323 is linked to the light 147 and the cheese pan is linked to the light 146.
  • the system may turn on the light 147 when sauce is needed for the second pizza, while it turned on the light 146 when sauce was needed for the first pizza.
  • the computing system 160 may process one or more images from the camera 150 to monitor the amount (for example, volume) of each food ingredient.
  • the system may determine whether there are enough ingredients in the food pans considering one or more of a received order, an expected order, and a predetermined amount. When it is determined that a food pan does not store enough food ingredient, the system may provide an instruction to refill the food pan.
  • the system may use a weight sensor, a LIDAR system, or another sensor other than the camera system to monitor the amount of a food ingredient.
  • Figure 9 is a flowchart of providing a step-by-step food preparation guidance based on the example recipe 600.
  • the system may provide guidance for each step sequentially from the first step (Step 1) to the fourth step (Step 4). Operation of the system for each step will be described in detail with reference to other drawings.
  • Figure 10 is a flowchart of providing guidance for an individual step of a recipe according to an implementation.
  • the process may include providing one or more instructions of the current step (S1010), indicating the location of an ingredient necessary for the current step (S1020), and determining if the current step is completed based on monitoring of the pizza 220 being prepared (S1030).
  • the process of Figure 10 will be explained below using the example recipe 600.
  • the system may locate one or more instructions 630 linked to the current step on the database 170, and provide the instructions to the person 210 working at the station 100.
  • the system may retrieve the message 631 linked to Step 1 from the database 170, and control the display 130 to present the retrieved message.
  • the text instruction “Prepare a 10-inch dough” is presented on the display 130 for Step 1.
  • the system may locate, on the database 170, one or more light indicators linked to an ingredient necessary for the current step. To indicate the location of the necessary ingredient, the system may turn on the one or more light indicators, and turn off other indicators that are not linked to the necessary ingredient. For example, for Step 3 (adding cheese), the system refers to the database 170 shown in Figure 8 to locate the light group 147 that is linked to ‘cheese’. Then, the system may turn on the segment 147 of the light strip to indicate the location of cheese in the food pan array 120.
  • the system may determine whether the current step is completed to move on to the next step.
  • the system may locate one or more completion requirements 650 of the current step from the database of Figure 6A, and may determine the current step is completed when the requirements are satisfied.
  • the completion requirement for Step 4 is to add at least ‘twelve’ slices of pepperoni.
  • the system may process one or more images of the pizza being prepared, count the pepperoni slices placed, and determine that Step 4 is completed when the count reaches twelve. An example process for determining step completion will be described in more detail with reference to Figure 11.
  • when it is determined that the current step is completed, the system turns off the indicator lights activated for the current step, and proceeds to provide guidance for the next step of the recipe.
  • the system may provide a notification that the current step is completed.
  • when it is determined that the last step is completed, the system provides a notification that the pizza is ready for serving to a customer or ready for further processing.
  • An example screen of Figure 17 shows a notification that all steps at the station 100 are completed and the pizza 220 is ready to bake.
  • Figure 11 shows a flowchart of determining completion of a recipe step based on monitoring of a pizza being prepared.
  • the process may include capturing images of the pizza 220 being prepared (S1110), processing the images to identify one or more ingredients on the pizza 220 (S1120), computing a progress index of the current step (S1130), determining whether the current step is completed (S1140), and repeating the steps (from S1110 to S1140) when the current step is not completed.
  • One or more cameras may be used to monitor a dish being prepared.
  • the camera 151 may, periodically or intermittently, capture images of the pizza 220 and send the images to the computing system 160 or another computer for further processing.
  • the camera 151 may acquire a video of the table 110 continuously, and send one or more frames of the video to a computing device for further processing.
  • the system may process one or more images from the camera 150 to identify one or more food ingredients on the pizza 220 being prepared.
  • the computing system 160 detects an object in an image, determines feature(s) (e.g., color, shape, and size) of the object, and identifies a food ingredient when the object’s feature(s) match the food ingredient’s data stored on the database.
  • the computing system 160 may use various algorithms other than the examples for identifying food ingredients.
  • the computing system 160 uses a machine-trained model for identifying food ingredient(s) from the camera image(s). For example, the computing system may perform image segmentation of a camera image to find one or more segments each corresponding to an object in the image, to find boundaries separating the segments, and to classify pixels of the images into the segments.
  • the system may process the camera image(s) to determine one or more features for each food ingredient appearing in the camera image(s). For each ingredient, the system may determine one or more of size, count, location and color although not limited thereto. For example, for Step 1 (preparing dough) of the example recipe, the system may compute a size, an area and a color of the dough for use in determining completion of Step 1. For Step 4 (placing 12 slices of pepperoni), the system may determine one or more of the number of pepperoni slices added on the pizza 220, the size of each pepperoni slice, and the location and color of each pepperoni slice.
  • the system may determine one or more non-visible features that do not rely on the visual appearance of food ingredients in the camera images. For example, the system may obtain one or more of the temperature of the pizza, the weight of the pizza, and the time elapsed for the current step although not limited thereto.
  • the system may compute an index (measure) representing progress of the current step using one or more features obtained from monitoring of the pizza 220 being prepared.
  • the progress index may be based on one or more of the visible features, one or more of the non-visible features, or a combination thereof.
  • Example progress indices will be discussed in detail with reference to Figure 12A to Figure 16.
  • the system may determine the current step’s completion when the current step’s progress index reaches a predetermined threshold (e.g., 100%).
  • the system may determine the current step’s completion when the completion requirement 650 of the current step is satisfied. Once it is determined that the current step is completed, the system starts to provide guidance for the next step.
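
Putting the loop of Figure 11 together, a minimal sketch could look like the following; `capture_image` and `compute_progress` are hypothetical stand-ins for the camera interface and the per-step image processing described above:

```python
# Minimal sketch of the monitoring loop of Figure 11.
import time

def run_step(step: int, capture_image, compute_progress,
             threshold: float = 100.0, poll_s: float = 1.0) -> None:
    while True:
        image = capture_image()                    # S1110: capture image
        progress = compute_progress(step, image)   # S1120-S1130: identify, index
        if progress >= threshold:                  # S1140: completed?
            break                                  # proceed to the next step
        time.sleep(poll_s)                         # repeat S1110 to S1140
```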
  • Figure 12A is an example screen 1200 for Step 1 (dough preparation) of the example recipe 600.
  • Figure 12B is a photograph of an example pizza dough.
  • the screen 1200 presents the pizza’s name 1210, the current step’s number 1220, a text instruction for the current step 631, an image (or a video stream) 1230 of the pizza being prepared, a progress indicator 1240, and time elapsed for the order 1260.
  • Step 1 is to prepare a ‘10-inch’ dough.
  • the system may process one or more images of the dough 1250 to compute the dough’s size (e.g., length, diameter, 2-dimensional area).
  • the system may compute progress of Step 1 using the computed dough size.
  • the current progress of 90% is computed as a ratio of the computed dough size (9 inches) to the required size (10 inches) for completing Step 1 although not limited thereto.
  • the system may consider one or more of the dough’s shape, 2-dimensional area, thickness, freshness and color to determine progress of Step 1 although not limited thereto.
  • the system may determine completion of Step 1 when the dough’s size satisfies Step l’s predetermined requirement.
  • the system may determine completion of the dough preparation step when the prebaked dough is placed on the table 110. After determining completion of Step 1, the system starts to provide guidance for the next step in the recipe, Step 2.
  • Figure 13A is an example screen 1300 for Step 2 (applying sauce) of the example recipe 600.
  • the screen presents an image 1330 featuring the dough 1250 prepared at Step 1 and sauce 1350 applied over the dough.
  • the screen may also present an instruction 632 for Step 2 and a progress indicator 1340.
  • Figure 13B is a photograph of an example pizza dough with sauce added.
  • Step 2 is to apply sauce over 3/4 of the dough prepared at Step 1.
  • the system may process one or more images of the pizza being prepared to compute a 2-dimensional area of the dough 1250 and a 2-dimensional area of the sauce 1350 placed on the dough. Using the computed areas, the system may compute a ratio of the sauce area to the required area (3/4 of the dough area) as the progress measure of Step 2.
  • the system may compute the dough’s area assuming the dough is in a circular shape and using the diameter of the dough.
  • the system may draw a box 1371 surrounding a dough 1372, and may use the box’s area for computing the progress measure. The system may use a processing different from the examples.
  • the system may process the image 1330 using a machine- trained model to identify a first group (segment) of pixels as the sauced area 1350 and to identify a second group (segment) of pixels as the dough 1250 that is not cover with the dough.
  • the system may compute an area of the sauced area 1350 using the number of pixels in the first group, compute an area of the dough using the number of pixels in the second group, and compute a ratio between the two areas for evaluating progress of Step 2. If the first group (sauce) occupies 600 pixels in the image 1330 and the second group (dough not covered with the sauce) occupies 400 pixels, the system may determine that 60% of the dough is covered with the sauce.
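  • a minimal sketch of this pixel-count ratio, assuming a per-pixel segmentation mask is available (names and class ids below are hypothetical):

    import numpy as np

    SAUCE, DOUGH = 1, 2  # hypothetical class ids from the machine-trained model

    def sauce_coverage(label_mask):
        # label_mask holds one class id per pixel; coverage is the sauced pixels
        # divided by all pixels inside the dough's outer boundary.
        sauce_px = np.count_nonzero(label_mask == SAUCE)
        dough_px = np.count_nonzero(label_mask == DOUGH)
        total = sauce_px + dough_px
        return sauce_px / total if total else 0.0

    # 600 sauce pixels and 400 uncovered dough pixels give 0.6 (60%), as above.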
  • the system may determine completion of Step 2 when the sauced area 1350 is larger than a predetermined percentage of the 2-dimensional area of the dough.
  • the system may determine completion of Step 2 using a criterion other than the area ratio.
  • Figure 14A is an example screen 1400 for Step 3 (adding cheese) of the example recipe 600.
  • the screen presents an image 1430 featuring the dough 1250 prepared at Step 1, the sauce 1350 applied at Step 2, and cheese 1450 added over the dough.
  • the screen also presents the instruction 633 for Step 3 and a progress indicator 1440.
  • Figure 14B is a photograph of a pizza when cheese is being added.
  • Figure 14C is another photograph showing a cheese adding process.
  • Step 3 is to place cheese to cover 90% of sauce.
  • the system may process one or more images of the pizza being prepared to compute a 2-dimensional area of the sauce 1350 and a 2-dimensional area of cheese added over the sauce 1350.
  • the system may compute a ratio of the area of cheese to the area of the sauce as the progress measure 1440 of Step 3. A different process may be used to compute the progress measure.
  • the system may use a grid of virtual segments to determine how much cheese is placed on the sauce 1350.
  • the system overlays the grid 1470 over the sauced area 1350 to virtually partition the sauced area into a plurality of sauced segments 1471. For each unit segment, the system determines whether it is covered with cheese or not, counts the number of cheese-covered segments, and computes a ratio of the cheese-covered segments to the entire set of sauced segments as the current progress 1440 of Step 3.
  • in determining a cheese-covered segment, the system identifies a cheese-covered portion inside a segment based on the color of cheese and the color of sauce, and determines that the segment is cheese-covered when the cheese-covered portion is greater than a predetermined percentage of the segment area. In an implementation, the system computes a representative color (e.g., an average) of the segment, and determines that the segment is cheese-covered when the representative color is closer to that of the cheese, although not limited thereto. In Figure 14B, each of the green boxes 1472 represents a cheese-covered segment. In an implementation, the system may compute a progress index of Step 3 using a process different from the example.
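  • a minimal Python sketch of the grid-based check described above, assuming an RGB image, a boolean mask of the sauced area, and reference colors for cheese and sauce (all names and values are hypothetical):

    import numpy as np

    def cheese_coverage_by_grid(image, sauced_mask, cell, cheese_rgb, sauce_rgb):
        # Partition the sauced area into cell x cell segments; a segment counts
        # as cheese-covered when its average color is closer to cheese than to
        # sauce, mirroring the representative-color comparison above.
        cheese_rgb = np.asarray(cheese_rgb, dtype=float)
        sauce_rgb = np.asarray(sauce_rgb, dtype=float)
        covered = sauced = 0
        h, w = sauced_mask.shape
        for y in range(0, h, cell):
            for x in range(0, w, cell):
                m = sauced_mask[y:y + cell, x:x + cell]
                if not m.any():  # segment lies entirely outside the sauced area
                    continue
                sauced += 1
                avg = image[y:y + cell, x:x + cell][m].mean(axis=0)
                if np.linalg.norm(avg - cheese_rgb) < np.linalg.norm(avg - sauce_rgb):
                    covered += 1
        return covered / sauced if sauced else 0.0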
  • the system may process the image 1430 using a machine-trained model to classify a first group (segment) of pixels as cheese and a second group (segment) of pixels as sauce.
  • the system may count the number of pixels for each group in the image 1430 (or its modified version), compute a 2-dimensional area for each group, and determine progress of Step 3 using the pixel counts and the computed areas. For example, if the first group (cheese) occupies 300 pixels in the image 1430 and the second group (sauce on the dough) occupies 700 pixels, the system may determine that 30% of the sauce is covered with the cheese.
  • the system may determine completion of Step 3 when cheese covers more than a predetermined percentage of the 2-dimensional area of the pizza dough or of a sauced area within the 2-dimensional area (when the computed progress reaches 100%), although not limited thereto. Subsequent to completion of Step 3, the system may provide an instruction to start Step 4.
  • Figure 15A is an example screen for a pepperoni adding step.
  • the screen 1500 presents a current image 1530 featuring the dough 1250, the sauce 1350, and cheese 1450 prepared at Step 3.
  • the screen also presents an instruction 634 for Step 4 and a progress indicator 1540.
  • Figure 15B is a photograph of a pepperoni pizza being prepared.
  • Step 4 is to add 12 slices of pepperoni over the cheese placed at Step 3.
  • the system may process a current image of the pizza to identify pepperoni slices and to count pepperoni slices added over the cheese.
  • the current progress of Step 4 (50%) is computed as the ratio of the current number of pepperoni slices (six) to the predetermined number (twelve) although not limited thereto.
  • the system may count a pepperoni slice when it is greater than a predetermined size. The system may not count a pepperoni slice when it does not meet a predetermined requirement for pepperoni.
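  • the slice-count progress above might be sketched as follows (slice detection itself is out of scope; the names and thresholds are hypothetical):

    def pepperoni_progress(detected_slice_areas, min_area, required_count=12):
        # Count only slices at least as large as the predetermined minimum
        # size, then report progress against the required count, capped at 100%.
        valid = sum(1 for area in detected_slice_areas if area >= min_area)
        return min(valid / required_count, 1.0)

    # Six valid slices against a requirement of twelve yields 0.5 (50%).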
  • the system may determine completion of Step 4 when the count of pepperoni slices reaches the predetermined number of twelve, although not limited thereto. Subsequent to completion of Step 4, the system may provide an instruction to bake the pizza (Figure 17).
  • Figure 16 shows another example screen 1600 of Step 4 that is subsequent to the screen 1500.
  • a hand 1610 is adding the seventh pepperoni slice 1670 to the pizza of the image 1530 (which has six pepperoni slices), but only five pepperoni slices are visible in the image 1630.
  • if a progress index of Step 4 were computed based on the number of currently visible pepperoni slices, the progress would be lower than the 50% shown in Figure 15A. It may confuse the person 210 if the system lowers the progress index in real time when a hand is obstructing the camera's view. To avoid such confusion, the system may not update a progress index when the pizza being prepared is not fully visible.
  • the computing system 160 processes a camera image to determine whether the food being prepared is fully visible in the image, and does not consider the image for computing a progress index or evaluating food preparation quality when the pizza is not fully visible.
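  • a sketch of that guard (hypothetical names): the displayed progress is held at its last value whenever the pizza is not fully visible in the current frame:

    def next_progress(last_progress, computed_progress, pizza_fully_visible):
        # Ignore frames in which a hand (or anything else) occludes the pizza,
        # so the displayed progress never drops merely because the view is blocked.
        return computed_progress if pizza_fully_visible else last_progress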
  • the system uses a machine-trained model to compute a progress for a recipe step and to determine completion of the recipe step.
  • the system may train a model such that the model outputs a progress index of a recipe step in response to an input of an image of a pizza being prepared.
  • the system uses a machine-trained model configured to determine completion of Step 3 in response to an image featuring cheese covering a sauced dough.
  • the system may present a screen indicating that the food is ready for serving or for further processing.
  • Figure 17 is an example screen 1700 notifying that a pizza prepared at the system is ready to bake.
  • Figure 18 is an example screen 1800 provided after completing all four steps of the example recipe.
  • the feedback screen 1800 includes, for each step, (1) first performance indices 1810 based on preparation time and (2) second performance indices 1820 based on preparation quality.
  • the system may provide an additional performance index, and may not provide one or more of the example performance indices.
  • the system collects data to evaluate the person's performance for each step. For example, the system measures a completion time for each step, compares the measured completion time with a predetermined desirable completion time, and computes a performance index representing how fast the worker completed the step. In an implementation, the system updates the person's preparation time rating 693 using the first performance indices 1810.
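  • one way such a time-based performance index could be computed (the scoring formula is an assumption, not taken from the disclosure):

    def speed_index(measured_seconds, desirable_seconds):
        # 1.0 when the step finishes within the desirable completion time,
        # decaying toward 0.0 as the step takes longer.
        if measured_seconds <= 0:
            return 0.0
        return min(desirable_seconds / measured_seconds, 1.0)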
  • the system evaluates the step using one or more criteria for determining a properly-performed step. Examples of the criteria were explained in connection with example recipe data.
  • the system computes a performance index representing how evenly the sauce spreads on the dough.
  • the system updates the person’s preparation quality rating 693 using the second performance indices 1820.
  • the computing system 160 uses a machine-trained model for determining location of a food ingredient, and monitoring progress of a recipe step.
  • a machine-trained model of an implementation is configured to, in response to an input of data of a photographic image, output information of one or more food ingredients featured in the photographic image.
  • the system may use a machine-trained model configured to perform image segmentation of a camera image for identifying objects (pans, food ingredients) in the image.
  • a data set for training of a model includes a number of data pairs. Each pair includes input data for the machine-trainable model being trained and desirable output data (a label) from the model in response to the input data. For example, for a machine-trainable model to identify food ingredients, the input data includes an image of a predetermined size that features one or more food ingredients, and the desirable output data includes one or more identifiers (names) of the featured food ingredients. For another example, for a machine-trainable model to evaluate progress of a recipe step, the input data includes images of food being prepared, and the desirable output data includes a percentage indicating progress of a food preparation step.
Training of Machine-trainable Model
  • a supervised learning technique can be used to prepare the machine-trained model. Any known learning technique can be applied to the training of the model as long as the technique can configure the model to output, in response to training input images, a name (identifier) of food ingredient within a predetermined allowable error rate.
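  • a minimal sketch of such a training data pair (the field names, image size, and labels are hypothetical):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class TrainingPair:
        image: np.ndarray  # fixed-size image featuring one or more ingredients
        label: str         # desirable output, e.g., an ingredient identifier

    dataset = [
        TrainingPair(np.zeros((224, 224, 3), dtype=np.uint8), "pepperoni"),
        TrainingPair(np.zeros((224, 224, 3), dtype=np.uint8), "sauce"),
    ]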
  • a convolutional neural network is used to construct the machine-trained model.
  • a convolutional neural network requires a smaller number of model parameters when compared to a fully connected neural network.
  • a neural network other than CNN can be used.
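  • the disclosure does not specify an architecture; a minimal convolutional classifier in PyTorch might look as follows (the layer sizes, input size, and class count are assumptions):

    import torch.nn as nn

    NUM_INGREDIENTS = 10  # hypothetical number of ingredient classes

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 56 * 56, NUM_INGREDIENTS),  # assumes 224x224 RGB inputs
    )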
  • Figure 19 depicts an example architecture of a computing system 160 that can be used to perform one or more of the techniques described herein or illustrated in other drawings.
  • the general architecture of the computing system 160 includes an arrangement of computer hardware and software modules that may be used to implement one or more aspects of the present disclosure.
  • the computing system 160 may include many more (or fewer) elements than those shown in Figure 19. It is not necessary, however, that all of these elements be shown in order to provide an enabling disclosure.
  • the computing system 160 includes a processor 1610, a network interface 1620, a computer readable medium 1630, and an input/output device interface 1640, all of which may communicate with one another by way of a communication bus.
  • the network interface 1620 may provide connectivity to one or more networks or computing systems.
  • the processor 1610 may also communicate with memory 1650 and further provide output information for one or more output devices, such as a display (e.g., display 1641), speaker, etc., via the input/output device interface 1640.
  • the input/output device interface 1640 may also accept input from one or more input devices, such as a camera 1642 (e.g., 3D depth camera), a keyboard, a mouse, a digital pen, a microphone, a touch screen, a gesture recognition system, a voice recognition system, an accelerometer, a gyroscope, a thermometer, an optical temperature measurement system, a sonar, a LIDAR device, a laser device, etc.
  • a camera 1642 e.g., 3D depth camera
  • the memory 1650 may store computer program instructions (grouped as modules in some implementations) that the processor 1610 executes in order to implement one or more aspects of the present disclosure.
  • the memory 1650 may include RAM, ROM, and/or other persistent, auxiliary, or non-transitory computer-readable media.
  • the memory 1650 may store an operating system 1651 that provides computer program instructions for use by the processor 1610 in the general administration and operation of the computing system 160.
  • the memory 1650 may further include computer program instructions and other information for implementing one or more aspects of the present disclosure.
  • the memory 1650 includes a user interface module 1652 that generates user interfaces (and/or instructions therefor) for display, for example, via a browser or application installed on the computing system 160.
  • the memory 1650 may include an image processing module 1653, a machine-trained model 1654 that may be executed by the processor 1610.
  • the computing system 160 can have multiples of one or more of these components (e.g., two or more processors and/or two or more memories).
  • Logical blocks, modules or units described in connection with implementations disclosed herein can be implemented or performed by a computing device having at least one processor, at least one memory and at least one communication interface.
  • the elements of a method, process, or algorithm described in connection with implementations disclosed herein can be embodied directly in hardware, in a software module executed by at least one processor, or in a combination of the two.
  • Computer-executable instructions for implementing a method, process, or algorithm described in connection with implementations disclosed herein can be stored in a non- transitory computer readable storage medium.

Abstract

This application discloses a technology for guiding a person to prepare foods at a food preparation station. The food preparation station has a plurality of food pans. The technology may track location changes of the food pans or ingredients contained in the food pans, and indicate the current location of an ingredient when needed. The technology monitors a dish being prepared, and provides step-by-step guidance according to a predetermined recipe.

Description

KITCHEN SYSTEM WITH FOOD PREPARATION STATION
PRIORITY CLAIM
The present application claims priority to and the benefit of U.S. Application No. 17/464,405, filed on September 1, 2021, and U.S. Application No. 17/464,430, filed on September 1, 2021, the entire contents of which are incorporated herein by reference and relied upon.
BACKGROUND
[001] Restaurants use food preparation stations in their kitchens. A typical food preparation station has food pans containing food ingredients. Restaurant workers prepare a dish using ingredients from the food pans. A change of ingredient location may confuse restaurant workers.
SUMMARY
[002] One aspect of the present disclosure provides a method for use in food preparation. The method may comprise one or more of the steps of: providing a food preparation table, a pan array located next to the food preparation table, food pans arranged on the pan array, indicating lights, and at least one camera; providing at least one database storing data relating to predefined zones of the pan array and the indicating lights, wherein a predefined zone is preassigned to at least one indicating light and further linked thereto such that the at least one database is to be referred to for linkage between a predefined zone and at least one indicating light preassigned thereto; capturing, using the at least one camera, images of the pan array located next to the food preparation table such that the captured images feature food pans arranged on the pan array and ingredients contained therein; processing the captured images to determine a location on the pan array of an ingredient featured on at least part of the captured images such that the ingredient is determined to be in one of the predefined zones of the array; and updating the at least one database to link the ingredient to at least one of the indicating lights that is preassigned to the determined one of the predefined zones on the at least one database.
[003] In the method, the steps of capturing images of the pan array, processing the captured images, and updating the at least one database are performed repeatedly.
[004] In the method, at a first time a first one of the ingredients may be located in a first one of the predefined zones of the pan array, at a second time the first ingredient may be located in a second one of the predefined zones of the pan array, and at a third time between the first time and the second time, the first ingredient may be moved from the first predefined zone to the second predefined zone, such that at the first time the first ingredient may be linked to a first indicating light preassigned to the first predefined zone on the at least one database and further such that at the second time the first ingredient may be linked to a second indicating light preassigned to the second predefined zone on the at least one database.
[005] The method may further comprise referring to the at least one database to generate a first guidance at the first time and a second guidance at the second time. In an implementation, the first guidance may indicate the first ingredient using the first indicating light as the first ingredient is located in the first predefined zone and is linked to the first indicating light on the at least one database at the first time, and the second guidance may indicate the first ingredient using the second indicating light as the first ingredient is located in the second predefined zone and is linked to the second indicating light on the at least one database at the second time.
[006] In an implementation, processing the captured images may comprise identifying at least part of the ingredients based on color information contained in the at least part of the captured images.
[007] In an implementation, the at least one camera may further capture images of the food preparation table and food being prepared thereon, and the method may further comprise determining completion of a food preparation step based on the captured images of the food being prepared on the food preparation table and further based on a predetermined completion criterion for the food preparation step.
[008] In an implementation, the first guidance may be for a step to prepare a first food item, and the second guidance may be for a step to prepare another food item, for a later step to prepare the first food item, or for the same step to prepare the first food item that is run at a later time.
[009] In an implementation, the at least one database may further store a first recipe comprising a sauce step for spreading sauce on a pizza dough placed on the food preparation table, and the method may further comprise one or more of the steps of: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the sauce step is completed based on at least part of the captured images of pizza preparation. In an implementation, determining completion of the sauce step may comprise one or more of the steps of: processing an image of pizza preparation captured during the sauce step to identify a first group of pixels, each of which is located within an outer boundary of the pizza dough, obtaining a 2-dimensional area of the pizza dough based on a count of pixels of the first group, processing the image of pizza preparation or its modified version to identify a second group of pixels, each of which belongs to a sauce area where the sauce is applied over the pizza dough, obtaining a 2-dimensional size of the sauce area based on a count of pixels of the second group, and computing a percentage of the 2-dimensional size of the sauce area with reference to the 2-dimensional area of the pizza dough.
[010] In an implementation, the at least one database may further store a first recipe comprising a cheese step for adding cheese over a pizza dough placed on the food preparation table, and the method may further comprise one or more of the steps of: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the cheese step is completed based on at least part of the captured images of pizza preparation. In an implementation, determining completion of the cheese step may comprise one or more of the steps of: overlaying a grid pattern on a 2-dimensional area of the pizza dough in an image of pizza preparation captured during the cheese step, for each grid unit of the grid pattern, determining if the cheese occupies the grid unit based on color information of the grid unit, and counting the number of grid units occupied by the cheese. In an implementation, for each grid unit, a representative color may be computed, and the representative color may be compared against a predetermined color value to determine if the cheese occupies the grid unit. In an implementation, the representative color may be an average of pixel color values of pixels within each grid unit.
[011] In an implementation, the at least one database may further store a first recipe comprising a pepperoni step for adding pepperoni slices over a pizza dough placed on the food preparation table, and the method may further comprise one or more of the steps of: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the pepperoni step is completed based on at least part of the captured images of pizza preparation. In an implementation, determining completion of the pepperoni step may comprise one or more of the steps of: processing an image of pizza preparation captured during the pepperoni step to compute a count of pepperoni slices placed over the pizza dough, and determining completion of the pepperoni step when the computed count of pepperoni slices is equal to or greater than a predetermined number.
[012] Another aspect of the present disclosure provides a food preparation system. The food preparation system may comprise one or more of: a food preparation table; a pan array located next to the food preparation table; food pans arranged on the pan array; indicating lights configured to indicate predefined zones of the pan array; and at least one camera configured to capture images of the pan array. The food preparation system may further comprise one or more of: at least one database storing data relating to predefined zones of the pan array and the indicating lights, wherein a predefined zone is preassigned to at least one indicating light and further linked thereto such that the at least one database is to be referred to for linkage between a predefined zone and at least one indicating light preassigned thereto; and a computing system configured to operate the food preparation system.
[013] In an implementation, the computing system may be configured to cause the food preparation system to perform one or more of the following actions: capturing, using the at least one camera, images of the pan array located next to the food preparation table such that the captured images feature food pans arranged on the pan array and ingredients contained therein; processing the captured images to determine a location on the pan array of an ingredient featured on at least part of the captured images such that the ingredient is determined to be in one of the predefined zones of the array; updating the at least one database to link the ingredient to at least one of the indicating lights that is preassigned to the determined one of predefined zones on the at least one database; repeating capturing images of the pan array, processing the captured images and updating the at least one database; and generating guidances for a person working at the food preparation table with reference to the at least one database.
[014] In an implementation, if at a first time a first one of the ingredients is located in a first one of the predefined zones of the pan array, at a second time the first ingredient is located in a second one of the predefined zones of the pan array, and at a third time between the first time and the second time, the first ingredient is moved from the first predefined zone to the second predefined zone, the computing system may be configured to cause the food preparation system to update the at least one database such that at the first time the first ingredient is linked to a first indicating light preassigned to the first predefined zone on the at least one database, and at the second time the first ingredient is linked to a second indicating light preassigned to the second predefined zone on the at least one database, and further such that a first one of the guidances indicates the first ingredient using the first indicating light as the first ingredient is located in the first predefined zone and is linked to the first indicating light on the at least one database at the first time, and a second of the guidances indicates the first ingredient using the second indicating light as the first ingredient is located in the second predefined zone and is linked to the second indicating light on the at least one database at the second time.
[015] In an implementation, the computing system may be further configured to determine color information contained in the at least part of the captured images and to identify at least part of the ingredients based on the color information.
[016] In an implementation, the at least one camera may be further configured to capture images of the food preparation table and food being prepared thereon, and the computing system may be configured to determine completion of a food preparation step based on the captured images of the food being prepared on the food preparation table and further based on a predetermined completion criterion for the food preparation step.
[017] In an implementation, the at least one database may store a recipe comprising a sauce step for spreading sauce on a pizza dough placed on the food preparation table, and the computing system may be configured to perform one or more of the following actions: processing an image of pizza preparation captured during the sauce step to identify a first group of pixels, each of which is located within an outer boundary of the pizza dough; processing the image of pizza preparation or its modified version to identify a second group of pixels, each of which belongs to a sauce area where the sauce is applied over the pizza dough; and determining completion of the sauce step based on a count of pixels of the first group and further based on a count of pixels of the second group.
[018] In an implementation, the at least one database may store a recipe comprising a cheese step for adding cheese over a pizza dough, and the computing system may be configured to perform one or more of the following actions: overlaying, on an image of pizza preparation captured during the cheese step, a grid pattern comprising a plurality of unit grids; identifying a first group of unit grids, each of which is located within an outer boundary of the pizza dough; identifying a second group of unit grids, each of which belongs to a cheese area where cheese is applied over the pizza dough; and determining completion of the cheese step based on a count of unit grids of the first group and further based on a count of unit grids of the second group.
[019] In an implementation, the at least one database may store a recipe comprising a pepperoni step for placing pepperoni slices over a pizza dough, and the computing system may be configured to perform one or more of the following actions: processing an image of pizza preparation captured during the pepperoni step to compute a count of pepperoni slices placed over the pizza dough; and determining completion of the pepperoni step when the computed count of pepperoni slices is equal to or greater than a predetermined number.
[020] In an implementation, images of the pizza preparation may comprise at least one image containing the person's hand overlaying at least part of a pizza dough, and the at least one image containing the person's hand may not be used to determine completion of the sauce step, the cheese step, or the pepperoni step.
BRIEF DESCRIPTION OF THE DRAWINGS
[021] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
[022] Figure 1 is a flow chart for preparing a pizza according to an implementation.
[023] Figure 2A illustrates a kitchen system according to an implementation.
[024] Figure 2B is a side view of the station of Figure 2A.
[025] Figure 3 illustrates a food pan array viewed from the top according to an implementation.
[026] Figure 4A is a photograph of an example food preparation station according to an implementation.
[027] Figure 4B is a photograph showing a food pan array of the example station of Figure 4A.
[028] Figure 4C shows a camera system of the example station of Figure 4A.
[029] Figure 4D shows a light indicator of the example station of Figure 4A.
[030] Figure 5 is a flow chart of the overall process of providing a food preparation guide to a person according to an implementation.
[031] Figure 6A illustrates data of a recipe according to an implementation.
[032] Figure 6B illustrates data of food preparation history according to an implementation.
[033] Figure 6C illustrates data of a person according to an implementation.
[034] Figure 7 is a flowchart of determining and storing locations of food ingredients according to an implementation.
[035] Figure 8 illustrates data of food ingredients and their locations according to an implementation.
[036] Figure 9 is a flowchart of providing a step-by-step food preparation guidance according to an implementation.
[037] Figure 10 is a flowchart of providing guidance for an individual step of a recipe according to an implementation.
[038] Figure 11 is a flowchart of determining progress of a recipe step according to an implementation.
[039] Figure 12A is an example screen for a dough preparation step according to an implementation.
[040] Figure 12B is a photograph of a pizza dough being prepared according to an implementation.
[041] Figure 13A illustrates a screen for a sauce adding step according to an implementation.
[042] Figure 13B is a photograph of a sauce adding step according to an implementation.
[043] Figure 14A is an example screen for a cheese adding step according to an implementation.
[044] Figure 14B is a photograph of a cheese adding step according to an implementation.
[045] Figure 14C is another photograph of a cheese adding step according to an implementation.
[046] Figure 15A is an example screen for a topping adding step according to an implementation.
[047] Figure 15B is a photograph of a topping adding step according to an implementation.
[048] Figure 16 illustrates a screen for a topping adding step according to an implementation.
[049] Figure 17 illustrates a screen notifying a completed food preparation according to an implementation.
[050] Figure 18 is an example screen to provide performance feedback according to an implementation.
[051] Figure 19 illustrates one or more computing systems for use with one or more implementations.
DETAILED DESCRIPTION
[052] Hereinafter, implementations of the present invention will be described with reference to the drawings. These implementations are provided for better understanding of the present invention, and the present invention is not limited only to the implementations. Changes and modifications apparent from the implementations still fall in the scope of the present invention. Meanwhile, the original claims constitute part of the detailed description of this application.
Food Preparation Station
[053] Restaurants use food preparation stations in their kitchens. A typical food preparation station has a food preparation table and food pans containing food ingredients. Restaurant workers (workers) prepare a food on the food preparation table using ingredients from the food pans.
Recipe Guidance and Food Pan Indicating Light
[054] To help workers prepare food, guidance for preparing food may be provided on the food preparation station. Workers may follow such instructions to prepare food. The station may be provided with indication lights for indicating food pans. To help workers locate ingredients quickly, the station may turn on an indicating light to indicate a food pan containing a particular ingredient to be used at a particular step of the instructions. Sometimes, however, the food pan indicated with the indicating light may contain another ingredient, which may confuse workers.
Tracking Changes of Ingredient Location
[055] An enhanced food preparation station may be associated with a system that tracks location changes of the food pans or ingredients contained in the food pans. The system may maintain the current location of each ingredient contained in each food pan. The system can then use the accurate location of each ingredient and turn on the indicating light(s) for indicating the correct ingredient to be used at each step of the instructions. The configuration and operation of an enhanced food preparation station will be described with reference to an example recipe.
Pepperoni Pizza
[056] Figure 1 illustrates a flow chart for preparing a pepperoni pizza on a food preparation station before the pizza is baked in a pizza oven or furnace. Step 1 is preparing a dough, which is followed by Step 2 for adding sauce on the dough. Then, at Step 3, cheese is added over the sauce, which is followed by Step 4 for adding pepperoni over cheese. As exemplified, a flow of preparing a pizza includes steps of sequentially stacking a food ingredient over a pizza dough. While a pepperoni pizza recipe is discussed herein, the station can guide a person to prepare different pizzas and various dishes other than pizzas.
FOOD PREPARATION SYSTEM
Food Preparation Station
[057] Figure 2A illustrates a kitchen system according to an implementation. Figure 2B illustrates a side view of the station of Figure 2A. Figure 3 illustrates a food pan array viewed from the top. The food preparation station 100 of Figure 2A includes a food preparation table 110 and a food pan array 120. The station 100 further includes a display 130, light indicators 140, at least one camera 150, a computing system 160, a database 170, and an ID card reader 180. Figure 4A to Figure 4D are photographs of an example food preparation station 4100.
Food Preparation Table
[058] The food preparation table 110 provides a working surface on which food is prepared. Figure 2B shows a person 210 preparing a pizza 220 on the table 110. The table 110 is adjacent to the food pan array 120 such that the person 210 can pick up food ingredients from the array 120 without having to step toward the array 120. The station of Figure 4A has a food preparation table 4120 with two pizzas 4121, 4122 being prepared. The table 4120 is sized such that two persons can work at the same time.
Food Pan Array
[059] A food pan array is for temporarily storing food ingredients. The food pan array 120 of Figure 3 includes a frame 310 and a plurality of food pans 320 placed on the frame 310. Figure 4B shows another food pan array 4110. In the example of Figure 3, the food pans 320 are arranged in 6 columns and 2 rows. A food pan array may have a different arrangement from the examples.
Food Pans
[060] In an implementation, each one of the food pans 320 is a container for storing one or more food ingredients. The pans may be of the same size or different sizes. The pans may be of the same shape or different shapes. A food pan may be used with or without a lid or cover. Figure 4B shows example food pans 4420 containing ingredients to prepare pizzas.
Food Pan Frame - Rail Structure
[061] In an implementation, the frame may have a rail structure on which one or more food pans are placed. Referring to Figure 4B, the food pan array 4110 has two elongated bars (rails) 4410 on which food pans 4420 are placed in a row. Each food pan has a flange to be slidably placed on the two elongated rails such that each food pan can slide along the rails 4410 and change its location in the array 4110.
Food Pan Frame - Recesses
[062] In an implementation, the frame may include a plurality of recesses (or holes), each of which is to receive one or more food pans. One or more food pans can be placed into each recess. In embodiments, a frame may have a structure different from the examples for holding one or more food pans.
Light Indicators
[063] In an implementation, light indicators are used to visually indicate locations of food ingredients. Referring to Figure 3, a light indicator 141 is provided above a pepperoni pan 321. When pepperoni is needed for the pizza 220, the indicator 141 may be selectively turned on to draw the person's attention to the pan 321 and to indicate the location of pepperoni while the other light indicators are not turned on. Alternatively, to indicate the pepperoni pan 321, the indicator 141 may be turned off while all the other light indicators are turned on.
Location of Light Indicators
[064] In Figure 2A, for example, the light indicators 140 are installed on the frame 310. In implementations, one or more lights may be attached to a pan of the array 120 such that the lights are visible to the person 210. In implementations, a lighting device such as a spotlight installed over the station may highlight a particular food pan to indicate ingredient contained therein.
Positional Association Between Indicator and Pan
[065] Light indicators may be arranged according to a predetermined layout from which the person 210 can recognize which pan is associated with which light and will pay attention to a particular pan when an indicator is on. For example, in Figure 3, a series of light indicators 142 are installed along an upper edge of the frame 310 and above Row 2 of food pans. The light indicators 142 are sized and arranged such that each indicator is positioned right above its corresponding food pan of Row 2. From the arrangement, the person 210 recognizes that the indicator 141 is associated with the pepperoni pan 321 as it is the closest to the pan 321, and will pay attention to the pepperoni pan 321 when the indicator 141 is on. In Figure 3, for another example, a light strip 144 is installed along a lower edge of the frame 310 and under Row 1 of food pans, and a group of six lights 146 is right under the sauce pan 323. Turning on the six lights 146 would suggest that the person 210 pay attention to the sauce pan 323 rather than other pans because the sauce pan 323 is the closest pan right above the lights 146.
Indicator Not Suggesting a Particular Pan
[066] In Figure 3, among the lights 145 of the light strip 144, two lights 148 are not distinctively close to a particular pan, and do not overlap any food pan along a column direction. While the system may turn on a group of lights 147 to indicate the cheese pan 324 and turn on another group 146 to indicate the sauce pan 323, the system may not turn on the two lights 148 interposed between the two groups 146, 147. In implementations, the system may not operate an indicator in association with a particular food pan when the person would not recognize that the pan is associated with the indicator from the indicator's location on the frame 310.
Two or More Indicators for a Single Pan
[067] In implementations, two or more indicators are assigned to a single food pan. Referring to Figure 4D, a light indicator 4140 includes two LED light strips 4141, 4142 installed above a food pan 4421. The two strips 4141, 4142 may operate together or independently to draw a person's attention to the pan 4421. When two pizzas 4121, 4122 are being prepared on the table 4120 as shown in Figure 4B, the lower strip 4141 may be turned on when the pan's ingredient is needed for the left pizza 4121, and the upper strip 4142 may be turned on when the pan's ingredient is needed for the right pizza 4122, although not limited thereto.
Controlling Indicators Referencing to Database
[068] To indicate locations of food ingredients using light indicators, the system may have location information for each indicator and also have information of which indicator is associated with which ingredient. In implementations, for each food ingredient, the system stores the location of the ingredient in connection with one or more light indicators that have positional association with the ingredient, as exemplified in Figure 8. When an ingredient is needed to prepare the pizza 220, the system may locate one or more light indicators to turn on based on the link between the ingredient and the one or more light indicators on the database.
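For illustration, the linkage described above might be kept as two small tables, one mapping each predefined zone to its preassigned indicator ids and one mapping each ingredient to its current zone (all names and ids below are hypothetical, not taken from the disclosure):

    # Zone-to-lights assignments are fixed at installation time; the
    # ingredient-to-zone table is what the image processing keeps current.
    zone_to_lights = {"zone_1": [1, 2], "zone_2": [3], "zone_3": [4, 5]}
    ingredient_to_zone = {"sauce": "zone_1", "pepperoni": "zone_2"}

    def lights_for(ingredient):
        # Indicator lights to turn on when this ingredient is needed.
        zone = ingredient_to_zone.get(ingredient)
        return zone_to_lights.get(zone, [])

    def move_ingredient(ingredient, new_zone):
        # Re-link an ingredient after the camera finds its pan in a new zone.
        ingredient_to_zone[ingredient] = new_zone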
Operation Modes of Light Indicators
[069] A light indicator may stay turned on, flash, or change its color and brightness to indicate the location of its corresponding food ingredient or to indicate a status of the food ingredient. The light indicator may operate in a way different from the example to draw the person's attention.
Display
[070] The display 130 is for displaying food preparation information for the person 210 working at the station 100. For example, the display 130 may display one or more of a received order, instructions to prepare an ordered pizza, the current progress of pizza preparation, and a performance feedback after the pizza is prepared.
Location of Display
[071] The display 130 may be placed over the food pan array 120, although not limited thereto. In an implementation, the display 130 may be installed next to the table such that the person can see the pizza 220 and the display 130 at the same time. In implementations, the display 130 faces the person 210 such that the person can read information on the display while preparing the pizza 220 on the table 110.
Two or More Displays
[072] In an implementation, a food preparation station may use two or more displays. In Figure 4A, the station 4100 has two independent displays 4131, 4132. The left display 4131 may provide guidance for a first person to prepare the left pizza 4121, and the right display 4132 may provide guidance for a second person to prepare the right pizza 4122, although not limited thereto.
Camera
[073] The system includes one or more cameras 150 for capturing images of the table 110 and the array 120. Referring to Figure 2B, a camera 152 is installed for monitoring food ingredients in the pans 320, and another camera 151 is installed for monitoring the pizza 220 being prepared on the table 110. In an implementation, a single camera may monitor both the table 110 and the food pans 320. In the station of Figures 4A to 4C, a camera 4151 is provided for monitoring food preparation on the table 4120 and another camera 4152 is provided for monitoring food ingredients in the array 4110.
Camera Location
[074] The camera of Figure 2B is installed over the food pan array 120 and the display 130 so as not to interfere with the person's sight or action. In Figure 4, the two cameras 4150 are installed over the displays 4131, 4132 and the food pan array 4110. In implementations, a camera system may be at a location different from the examples.
Additional Monitoring Devices
[075] In an implementation, the station 100 includes a device other than a camera to monitor food ingredients or the pizza 220 being prepared. For example, one or more thermometers may monitor temperature of each food ingredient or the pizza. A weight measurement system can be used to measure the weight of the pizza 220 or a food ingredient contained in a food pan. A laser scanner or a light detection and ranging (LIDAR) device may be used for measuring a thickness of a food ingredient (e.g., pizza dough, cheese over the pizza dough) or for measuring location and distribution of an ingredient on the pizza 220. In an implementation, a device other than the examples may be used.
Computing System
[076] The computing system 160 is for processing information relating to operation of the station 100. The computing system 160 is connected to the display 130, the light indicators 140, the camera 150, the database 170 and the ID card reader 180. The computing system 160 may communicate with a device outside the station 100. In an implementation, the computing system 160 can be outside a kitchen where the food preparation table 110 is located, and communicates with other devices of the station 100 via a communication network. In an implementation, the computing system 160 communicates with another computing system to obtain information of an order for a pizza. In an implementation, the computing system 160 can use computing power of another system (e.g., cloud computing). An example architecture of one or more computer systems for use with one or more implementations will be described in detail with reference to Figure 19.
Database
[077] The database 170 is for storing data for providing food preparation guidance. The database 170 may be one or more of a local data store of the computing system 160 and a remote data store connected to the computing system 160 via a communication network. The database 170 may store a plurality of recipes that may be prepared at the station, profiles of workers, and a history of food preparation work done at the station 100. For each recipe, the database 170 may store information of necessary ingredients and locations of the ingredients. For each worker or person, the database 170 may store a skill level for each pizza and a history of food preparation work. The database 170 may store additional data other than the examples, and may not store one or more of the examples. Data stored on the database 170 will be described in detail with reference to other drawings.
ID card Reader
[078] The ID card reader 180 is for check-in and check-out of the person 210 at the station 100. The station 100 may include one or more of an ID card reader, a keypad, and a face recognition system. The station 100 may include a device other than the example devices. Figure 4A shows two ID card readers 4181, 4182 installed on a frame of the array 4110.
PROVIDING FOOD PREPARATION GUIDANCE
[079] Figure 5 is a flow chart for providing guidance to prepare food, here a pizza. In response to an assignment to prepare a pizza at the station 100, the system may retrieve data of a worker or person, retrieve recipe data of the ordered pizza, and provide guidance according to the retrieved recipe data.
Retrieving Worker Data (S510)
[080] In response to a check-in of the person or worker 210 or upon initiation of ***, the computing system 160 may locate the person’s profile on the database 170. The computing system may load data of the located profile on its local memory, or may use data already stored on its local memory without newly retrieving data from the database 170. An example profile of a worker will be discussed with reference to Figure 6C. This step is optional and may be omitted.
Retrieving Recipe (S520)
[081] In response to an order for the pizza 220 or upon initiation, the computing system 160 locates the pizza’s recipe on the database 170 and loads data of the recipe on a local memory. This step S520 may precede the step of retrieving worker data S510. The two steps S510, S520 may be performed in parallel. In an implementation, the computing system 160 uses data stored on its local memory without newly retrieving recipe data from the database 170. An example recipe (pepperoni pizza) will be discussed with reference to Figure 6A.
Providing Guidance (S530)
[082] Based on the recipe data and the person's profile, the system may provide food preparation guidance to the person 210. For example, the system may display a text instruction on the display 130, play an audio or video guide, and turn on a light indicator to indicate the location of a pizza ingredient. The system may provide different instructions based on the person's experience level or work history related to the current recipe. Example data for use in providing food preparation guidance will be described in detail with reference to Figure 6A to Figure 6C.
RECIPE DATA
Recipe Data
[083] Figure 6A shows data of an example recipe stored on the database 170. Figure 6B shows an example food preparation history. Figure 6C shows example data of a worker (a station user). According to Figure 6A, the database stores, for each recipe, a recipe name 610, step number 620, instruction 630, ingredient 640 and step completion requirement 650. According to Figure 6B, the database stores a log of completed orders. For each order, the database stores an order number 681, a recipe name 610, a Worker ID 670, Time of Order Received 682, Time of Order Completed 683, and Preparation Speed Rating 684. According to Figure 6C, the database stores profiles of workers. For each worker, the database stores a worker ID 670, one or more recipes 610, a preparation speed rating 684, a preparation quality rating 685, and an experience level 690. In implementations, the database stores data in a way different from the examples of Figure 6A to Figure 6C. The database 170 may store additional data different from the examples, and may not store one or more of the example data.
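As a sketch only, the recipe record of Figure 6A might be represented as follows (the field names and the encoding of completion requirements are assumptions; the instruction texts are those given below):

    pepperoni_pizza = {
        "recipe_name": "pepperoni pizza",
        "steps": [
            {"number": 1, "instruction": "Prepare a 10-inch dough",
             "ingredient": "dough", "completion": {"diameter_in": 10}},
            {"number": 2, "instruction": "Place sauce on 3/4 of dough",
             "ingredient": "sauce", "completion": {"sauce_coverage": 0.75}},
            {"number": 3, "instruction": "Place cheese to cover 90% of sauce",
             "ingredient": "cheese", "completion": {"cheese_coverage": 0.90}},
            {"number": 4, "instruction": "Place 12 slices of pepperoni",
             "ingredient": "pepperoni", "completion": {"pepperoni_count": 12}},
        ],
    }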
Recipe Name (610)
[084] The recipe name 610 is for uniquely identifying each recipe on the database 170. When an order for ‘pepperoni pizza’ is received, a corresponding recipe 600 can be located using the recipe’s name 610. In an implementation, information other than the name of pizza may be used. For example, a predetermined code of a pizza may be used for delivering order information to the computing system 160, and the computing system 160 locates a corresponding recipe using the predetermined code.
Sequence Number (620)
[085] The example recipe 600 of ‘pepperoni pizza’ has four steps in total. Each step is numbered according to its order in the recipe, from Step 1 to Step 4. A recipe may have steps fewer or more than four. The database 170 may store the step order in a way different from the example of Figure 6A.
Instruction (630)
[086] For each step of the example recipe 600, the database may store one or more instructions to help the person 210 during each of the recipe steps. The instructions may include one or more of a text message, an audio message and a video guide predetermined for the recipe step. For example, when the person 210 needs to perform Step 1 (preparing a dough), the system may locate a first message 631 linked to Step 1 and deliver the first message to the restaurant worker.
Text Instructions
[087] In an implementation, the first message 631 includes a text instruction "Prepare a 10-inch dough", the second message 632 includes a text instruction "Place sauce on 3/4 of dough", the third message 633 includes a text instruction "Place cheese to cover 90% of sauce", and the fourth message 634 includes a text instruction "Place 12 slices of pepperoni". These text messages may be presented on the display 130 to guide a restaurant worker.
Audio and Video instructions
[088] In an implementation, the database stores an audio or video instruction for a recipe step, and the system plays the audio/video instruction at the beginning of or during the recipe step. For example, when Step 1 is completed, the system delivers a voice instruction saying "Place sauce on 3/4 of dough" for Step 2. For another example, during Step 2, the system may repeatedly play a video guide on the display 130 showing how to apply sauce.
Selective Instructions Based on Monitoring of Food Preparation
[089] In implementations, among instructions stored on the database 170, the system may provide one or more instructions selectively based on monitoring of the pizza 220. The system may select one or more instructions among a set of predetermined instructions based on one or more features identified from monitoring of the pizza being prepared. In implementations, the system may generate a new instruction that is suitable for the current status of the pizza 220. For example, during Step 2 (adding sauce), the system may request to add more sauce when it is determined that the amount of added sauce is not sufficient to complete Step 2.
Ingredient (640)
[090] For each step of the recipe 600, one or more ingredients are linked on the database 170. For example, Step 1 for preparing a dough is linked to ‘dough’, and Step 2 for adding sauce is linked to ‘sauce’. In an implementation, no ingredient may be linked to a recipe step when the step does not involve addition or removal of an ingredient.
Completion of Recipe Step (650)
[091] For each step of the recipe 600, the database 170 stores one or more requirements to determine whether the step is completed. The requirements may include one or more of (1) a desirable amount or count of an ingredient to be added (or removed) during the current step, (2) a size of an ingredient on the pizza 220, (3) a shape of the ingredient, (4) a desirable position of the ingredient, (5) distribution of the ingredient, (6) distance between individual pieces of the ingredient, (7) a temperature of the pizza 220, (8) a predetermined time limit of the current step, and (9) a quality or status of the ingredient (e.g., freshness, frozen, melt, chopped, deformation). For example, the system may determine that Step 4 (adding pepperoni) is completed when at least 12 slices of pepperoni (each sized greater than a predetermined minimum size) are added on the pizza 220. In an implementation, a requirement different from the examples may be used to determine a completed step.
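One way such stored requirements could be checked against observed features (the schema mirrors the hypothetical recipe record sketched earlier; all names are assumptions):

    def step_complete(observed, requirements):
        # A step is complete when every stored requirement is met; here each
        # requirement maps a feature name to a minimum acceptable value.
        return all(observed.get(name, 0) >= minimum
                   for name, minimum in requirements.items())

    # Step 4 example: twelve sufficiently large pepperoni slices were detected.
    print(step_complete({"pepperoni_count": 12}, {"pepperoni_count": 12}))  # True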
Evaluating Preparation Quality of Recipe Step
[092] In an implementation, the system may evaluate the quality of pizza preparation for each recipe step. To evaluate the preparation quality, the system may consider one or more features discussed above for determining step completion. In an implementation, the system may evaluate a recipe step using one or more criteria different from the step completion requirements. For example, the system may compute a rating for Step 4 (adding pepperoni) based on the distribution of pepperoni slices on the pizza 220 even when completion of Step 4 is determined based on the count of the pepperoni slices. In an implementation, the database 170 may store one or more criteria to evaluate a preparation quality of the pizza 220 for each recipe step.
Work History
[093] The database 170 may store records of orders prepared (or being prepared) at the station 100. As shown in Figure 6B, the database 170 may store, for each order, one or more of an order number 681 uniquely identifying the order, the name of the ordered pizza 610, an identification 670 of a person who prepared the ordered pizza, a time when the order was received 682, a time when the ordered pizza was completed 683, and a speed rating of the pizza preparation work 684. In an implementation, the database 170 may store data different from the examples of Figure 6B. In an implementation, the database 170 may store pizza orders prepared at a station other than the station 100.
Worker ID (670)
[094] The database 170 may store a worker ID that uniquely identifies a worker on the database. When a person taps his or her ID card on the card reader 180, the computing system may obtain the person's ID (HKL) and locate data of the person on the database. In an implementation, as shown in Figure 6B, a worker ID is linked with orders 681 the worker prepared such that the worker's performance or experience level may be determined based on the person's order history.
Preparation Speed Rating (684)
[095] The system may compute, for each completed order, a rating that represents how fast the ordered pizza had been prepared. The system may compute a preparation time of the ordered pizza using the order received time 682 and the pizza completion time 683, and compares it with a predetermined desirable preparation time for the ordered pizza to determine the speed rating 684. The system may measure the preparation time of the pizza from the start of the first recipe step on the table. In an implementation, the system may measure a completion time and evaluate preparation speed for each recipe step.
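A sketch of that computation from the stored timestamps (the mapping from times to a rating is an assumption, not taken from the disclosure):

    from datetime import datetime

    def preparation_speed_rating(received, completed, desirable_seconds):
        # Compare the measured preparation time against the predetermined
        # desirable preparation time; 1.0 means on time or faster.
        actual_seconds = (completed - received).total_seconds()
        if actual_seconds <= 0:
            return 0.0
        return min(desirable_seconds / actual_seconds, 1.0)

    rating = preparation_speed_rating(datetime(2021, 9, 1, 12, 0, 0),
                                      datetime(2021, 9, 1, 12, 6, 0), 300.0)
    # A 6-minute pizza against a 5-minute target scores about 0.83.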
Worker Profile
[096] In Figure 6C, the database 170 stores a profile for each worker of the station 100. For each worker, the database 170 may store one or more of a Worker ID 670, recipe names 610 of pizzas the worker prepared, a preparation speed rating 684 representing the worker's pizza preparation speed, a preparation quality rating 685 representing the worker's work quality, and an experience level 690 of the worker. In an implementation, the database 170 may store data different from the examples.
Preparation Quality Rating (685)
[097] The system may compute a preparation quality rating representing how properly the worker prepared pizzas in accordance with their predetermined recipes and quality standards. For example, for each recipe of pizzas a worker prepared, the system may evaluate preparation quality for each individual step of the recipe, and compute a percentage of steps satisfying a predetermined quality standard. The preparation quality rating 685 can be determined in a way different from the example.
Experience Level (690)
[098] The database 170 may store an experience level for each recipe linked to the worker ID 670. The experience level for a recipe may be determined based on one or more of the number of pizzas the worker prepared using the recipe, the worker’s preparation speed rating 684, and the worker’s preparation quality rating 685, as sketched below. The experience level may also be determined considering factors other than the examples.
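A minimal sketch of how the three factors could be blended into a per-recipe experience level; the weights, the 100-pizza normalization, and the five-level scale are assumptions made for illustration.

```python
def experience_level(pizzas_prepared: int, speed_rating: float,
                     quality_rating: float) -> int:
    """Illustrative per-recipe experience level on a 1-5 scale,
    blending work volume with speed and quality ratings."""
    volume = min(pizzas_prepared / 100.0, 1.0)  # saturates at 100 pizzas
    score = 0.5 * volume + 0.25 * speed_rating + 0.25 * quality_rating
    return 1 + int(score * 4)  # score in [0, 1] -> level in {1, ..., 5}
```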
Different Instructions for Different Experience Levels
[099] In an implementation, in providing guidance to prepare the pizza 220, the system may consider the profile of the person 210 preparing the pizza 220 at the station 100. The system may provide different instructions based on one or more of the person’s experience level 690 and the ratings 684, 685 for the ordered pizza (its recipe). For example, the system may provide no or limited guidance when the worker is well experienced with the ordered pizza, and may provide more detailed guidance when the worker has a lower level of experience with the ordered pizza.
UPDATING FOOD INGREDIENT LOCATION
[100] The kitchen system indicates the location of an ingredient within the pan array while food is being prepared. To indicate the location, the system needs the current location of the necessary ingredient and the specific light indicator associated with that location. The system performs a process to keep the data current for indicating the locations of food ingredients within the pan array.
Process of Updating Ingredient Locations
[101] Figure 7 shows an example process to update locations of food ingredients. The process includes capturing images of the food pan array (S710), processing the captured images to determine the location of each food ingredient (S720), determining one or more indicators associated with the location of each food ingredient (S730), and storing associations between food ingredients and light indicators on the database 170 (S740).
Capturing Images of Food Pan Array (S710)
[102] At least one camera captures images of the array 120. The images of the array 120 may be captured continuously, periodically, or intermittently. The captured images are then sent to the computing system 160 (or another computing device) for further processing. In implementations, the camera 150 may acquire a video of the array 120 continuously, and send at least part of the video frames to the computing system 160 or another computing device.
Identifying Ingredients in Pans
[103] The computing system 160 may process one or more images of the array 120 to identify food pans and food ingredients. In implementations, the computing system 160, with appropriate software, processes one or more images to locate each food pan in the images. In implementations, the computing system 160 may perform image segmentation of camera image(s) using a machine-trained model, and identify one or more food pans (or food ingredients) corresponding to segment(s) in the camera image(s). In implementations, for each identified food pan, the computing system may compute one or more features (e.g., color, shape, size, volume) of its contained material, and determine that a particular ingredient is contained in the pan when the computed feature(s) match the ingredient’s feature(s) stored on the database. The system may identify food pans or food ingredients using an approach different from the examples.
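For illustration only, the feature-matching branch described above could look like the sketch below, with a small table of reference features per ingredient; the colors, areas, and tolerances are invented for the example.

```python
import numpy as np

# Hypothetical per-ingredient reference features: mean RGB color, pan area (px).
REFERENCE_FEATURES = {
    "pepperoni": {"color": np.array([150.0, 40.0, 35.0]), "area": 4000.0},
    "cheese":    {"color": np.array([245.0, 230.0, 180.0]), "area": 12000.0},
    "sauce":     {"color": np.array([180.0, 45.0, 30.0]), "area": 15000.0},
}

def identify_ingredient(mean_color, area, color_tol=40.0, area_tol=0.5):
    """Return the ingredient whose stored features best match the observed
    pan contents, or None when nothing matches closely enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in REFERENCE_FEATURES.items():
        if abs(area - ref["area"]) / ref["area"] > area_tol:
            continue  # size is too far off for this ingredient
        dist = float(np.linalg.norm(np.asarray(mean_color) - ref["color"]))
        if dist < color_tol and dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```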
Determining Location of Ingredient (S720)
[104] The computing system 160 determines the location of each food pan (or food ingredient) identified from processing of the images of the array 120. In implementations, the computing system 160 may process the images of the array 120 to determine a reference point (e.g., a corner point, a center point) for each pan and to compute a coordinate of the pan’s reference point from a reference point of the frame 310 (e.g., a corner point, a center point). The computing system 160 may store the computed coordinate on the database 170 as the location of the pan’s food ingredient. In implementations, when food pans are arranged in columns and rows as in Figure 3, the system may store the location of the pepperoni pan 321 as Row 2, Column 2 as shown in Figure 8.
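The row/column bookkeeping can be reduced to quantizing a pan's reference coordinate against the frame's origin. A minimal sketch follows; the pan pitches are hypothetical dimensions, not taken from the drawings.

```python
def pan_zone(pan_x_mm: float, pan_y_mm: float,
             col_pitch_mm: float = 180.0, row_pitch_mm: float = 160.0):
    """Map a pan's reference point, measured from the frame's corner,
    to a (row, column) zone of the pan array."""
    row = int(pan_y_mm // row_pitch_mm) + 1
    col = int(pan_x_mm // col_pitch_mm) + 1
    return row, col

# pan_zone(200.0, 170.0) -> (2, 2), i.e., "Row 2, Column 2" as in Figure 8.
```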
Determining Indicator Corresponding to Ingredient (S730)
The system may determine one or more indicators that will draw attention to a particular food pan based on a positional relationship between the indicator and the ingredient. Referring to Figure 3, the light indicators 142, 144 are installed on the frame according to a predetermined layout. The location of the pepperoni pan 321 (Row 2, Column 2) is determined from processing of camera images. The system may assign the indicator 141 to the pan 321 as no other indicator is closer to the pan 321 and no other pan is closer to the indicator 141. In implementations, the system may associate an indicator with a pan when they are within a predetermined distance from each other, although not limited thereto. In implementations, the system may use a map of the food pan array that defines one or more indicator assignment zones. For each zone of the food pan array, the system assigns at least one light indicator based on a positional association between the zone and the indicator such that turning on the indicator would draw the person’s attention to the zone. When it is determined that an ingredient (or a pan) is located in an indicator assignment zone, the system associates or links, on the database, the ingredient (or the pan) to the indicator assigned to the zone such that the indicator may be turned on to indicate the location of the ingredient.
Updating Database to Store Indicator Associated with Ingredient (S740)
[105] The system may store on the database 170 information of which light indicator is associated with which food ingredient. Each food ingredient may be linked to at least one light indicator on the database. In Figure 8, for example, cheese is linked to the location of the cheese pan 324 (Row 1, Column 3), which is linked to the light group 147; accordingly, cheese is linked to the light group 147. Based on this association between cheese and the light group 147, the system may operate the light group 147 to indicate the location of cheese in the array 120.
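A sketch of the S740 bookkeeping under assumed identifiers: each zone is preassigned an indicator, and every detected ingredient is relinked to the indicator of the zone it currently occupies.

```python
# Zone -> indicator preassignment (hypothetical layout; cf. Figures 3 and 8).
ZONE_TO_LIGHT = {
    (1, 3): "light_group_147",  # zone of the cheese pan 324 in Figure 8
    (2, 2): "indicator_141",    # zone of the pepperoni pan 321
}

def update_links(detections, ingredient_to_light):
    """detections: iterable of (ingredient, (row, col)) pairs from S720.
    Overwrites each ingredient's link with its current zone's light."""
    for ingredient, zone in detections:
        light = ZONE_TO_LIGHT.get(zone)
        if light is not None:
            ingredient_to_light[ingredient] = light
    return ingredient_to_light

links = update_links([("cheese", (1, 3))], {})
# links["cheese"] == "light_group_147"
```

Because each pass simply overwrites the link, a pan that moves between passes (as in the refill example below) is relinked without any special-case handling.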
Updating Pan Location Changes in Real Time
[106] In implementations, the system may perform the process of Figure 7 continuously, periodically, or intermittently to keep the database 170 current and to reflect a pan location change without delay. The system may perform the process independently of providing step-by-step instructions for the pizza 220. The system may perform the process while it is providing instructions to prepare the pizza 220 such that the system can update the database in real time in response to a pan location change during the preparation of the pizza. The system may perform the process during a waiting time after completing a pizza such that a pan location change is reflected on the database before preparing another pizza.
Responding to Location Change Due to Food Pan Refill
[107] Sometimes, a food pan may be moved within the food pan array 120 after refilling. For example, when the person 210 refills the sauce pan 323 and the cheese pan 324 after preparing a first pizza, the person 210 may mistakenly swap the locations of the two pans. In response to such a pan location change, based on processing of camera image(s), the system updates the database such that the sauce pan 323 is linked to the light 147 and the cheese pan is linked to the light 146. Subsequently, when the person 210 prepares a second pizza, the system may turn on the light 147 when sauce is needed for the second pizza, whereas it turned on the light 146 when sauce was needed for the first pizza.
Monitoring of Additional Feature - Ingredient Amount
[108] Besides monitoring locations of food ingredients, the computing system 160 may process one or more images from the camera 150 to monitor the amount (for example, volume) of each food ingredient. The system may determine whether there are enough ingredients in the food pans considering one or more of a received order, an expected order, and a predetermined amount. When it is determined that a food pan does not contain enough of a food ingredient, the system may provide an instruction to refill the food pan. In an implementation, the system may use a weight sensor, a LIDAR system, or another sensor other than the camera system for monitoring the amount of a food ingredient.
STEP-BY-STEP FOOD PREPARATION GUIDANCE
[109] Figure 9 is a flowchart of providing step-by-step food preparation guidance based on the example recipe 600. The system may provide guidance for each step sequentially from the first step (Step 1) to the fourth step (Step 4). Operation of the system for each step will be described in detail with reference to other drawings.
Providing Guidance of Individual Recipe Step
[110] Figure 10 is a flowchart of providing guidance for an individual step of a recipe according to an implementation. The process may include providing one or more instructions of the current step (S1010), indicating the location of an ingredient necessary for the current step (S1020), and determining if the current step is completed based on monitoring of the pizza 220 being prepared (S1030). The process of Figure 10 will be explained below using the example recipe 600.
Providing Instruction of Current Step (S1010)
[111] The system may locate one or more instructions 630 linked to the current step on the database 170, and provide the instructions to the person 210 working at the station 100. For example, for Step 1 (preparing dough), the system may retrieve the message 631 linked to Step 1 from the database 170, and control the display 130 to present the retrieved message. In Figure 12, the text instruction “Prepare a 10-inch dough” is presented on the display 130 for Step 1.
Activating Indicator Associated with Ingredient of Current Step (S1020)
[112] The system may locate, on the database 170, one or more light indicators linked to an ingredient necessary for the current step. To indicate the location of the necessary ingredient, the system may turn on the one or more light indicators, and turn off other indicators that are not linked to the necessary ingredient. For example, for Step 3 (adding cheese), the system refers to the database 170 shown in Figure 8 to locate the light group 146 that is linked to ‘cheese’. Then, the system may turn on the segment 146 of the light strip to indicate the location of cheese in the food pan array 120.
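Step S1020 then reduces to a database lookup plus a light command. In the sketch below, `set_light` stands in for whatever LED-strip driver the station actually uses; it is an assumed interface, not one named in the disclosure.

```python
def indicate_ingredient(ingredient: str, ingredient_to_light: dict,
                        all_lights, set_light) -> None:
    """Turn on the indicator linked to the current step's ingredient
    and turn off every indicator that is not linked to it."""
    target = ingredient_to_light.get(ingredient)
    for light in all_lights:
        set_light(light, on=(light == target))

# Example with a stub driver that just prints the commands:
indicate_ingredient("cheese", {"cheese": "light_group_146"},
                    ["light_group_146", "light_group_147"],
                    lambda light, on: print(light, "ON" if on else "OFF"))
```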
Determining Step Completion (S1030)
[113] For each recipe step, the system may determine whether the current step is completed in order to move on to the next step. The system may locate one or more completion requirements 650 of the current step from the database of Figure 6A, and may determine that the current step is completed when the requirements are satisfied. For example, the completion requirement for Step 4 is to add at least ‘twelve’ slices of pepperoni. The system may process one or more images of the pizza being prepared, count the pepperoni slices placed, and determine that Step 4 is completed when the count reaches twelve. An example process for determining step completion will be described in more detail with reference to Figure 11.
Completion of Recipe
[114] In an implementation, when it is determined that the current step is completed, the system turns off the indicator lights activated for the current step, and proceeds to provide guidance for the next step of the recipe. The system may provide a notification that the current step is completed. In an implementation, when it is determined that the last step is completed, the system provides a notification that the pizza is ready for serving to a customer or ready for further processing. An example screen of Figure 17 shows a notification that all steps at the station 100 are completed and the pizza 220 is ready to bake.
DETERMINING COMPLETION OF INDIVIDUAL RECIPE STEP
Determining Based on Monitoring of Pizza
[115] Figure 11 shows a flowchart of determining completion of a recipe step based on monitoring of a pizza being prepared. The process may include capturing images of the pizza 220 being prepared (S1110), processing the images to identify one or more ingredients on the pizza 220 (S1120), computing a progress index of the current step (S1130), determining whether the current step is completed (S1140), and repeating the steps (from S1110 to S1140) when the current step is not completed.
Capturing Images of Pizza Being Prepared (S1110)
[116] One or more cameras may be used to monitor a dish being prepared. Referring to Figure 2B, the camera 151 may, periodically or intermittently, capture images of the pizza 220 and send the images to the computing system 160 or another computer for further processing. The camera 151 may acquire a video of the table 110 continuously, and send one or more frames of the video to a computing device for further processing.
Image Processing to Identify Food Ingredient (S1120)
[117] The system may process one or more images from the camera 151 to identify one or more food ingredients on the pizza 220 being prepared. In an implementation, the computing system 160 detects an object in an image, determines feature(s) (e.g., color, shape, and size) of the object, and determines a food ingredient when the object’s feature(s) match the food ingredient’s data stored on the database. The computing system 160 may use various algorithms other than the examples for identifying food ingredients. In an implementation, the computing system 160 uses a machine-trained model for identifying food ingredient(s) from the camera image(s). For example, the computing system may perform image segmentation of a camera image to find one or more segments each corresponding to an object in the image, to find boundaries separating the segments, and to classify pixels of the image into the segments.
Determining Visible Features of Food Ingredients
[118] In an implementation, the system may process the camera image(s) to determine one or more features for each food ingredient appearing in the camera image(s). For each ingredient, the system may determine one or more of size, count, location, and color, although not limited thereto. For example, for Step 1 (preparing dough) of the example recipe, the system may compute a size, an area, and a color of the dough for use in determining completion of Step 1. For Step 4 (placing 12 slices of pepperoni), the system may determine one or more of the number of pepperoni slices added on the pizza 220, the size of each pepperoni slice, and the location and color of each pepperoni slice.
Determining Non-visible Feature
[119] In an implementation, the system may determine one or more non-visible features without relying on the visual appearance of food ingredients in the camera images. For example, the system may obtain one or more of the temperature of the pizza, the weight of the pizza, and the time elapsed for the current step, although not limited thereto.
Determining Progress Index (S1130)
[120] In an implementation, the system may compute an index (measure) representing progress of the current step using one or more features obtained from monitoring of the pizza 220 being prepared. The progress index may be based on one or more of the visible features, one or more of the non-visible features, or a combination thereof. Example progress indices will be discussed in detail with reference to Figure 12A to Figure 16.
Determining Step Completion (S1140)
[121] The system may determine the current step’s completion when the current step’s progress index reaches a predetermined threshold (e.g., 100%). The system may determine the current step’s completion when the completion requirement 650 of the current step is satisfied. Once it is determined that the current step is completed, the system starts to provide guidance for the next step.
STEP-BY-STEP GUIDANCE FOR EXAMPLE RECIPE
Screen for Dough Preparation Step
[122] Figure 12A is an example screen 1200 for Step 1 (dough preparation) of the example recipe 600. Figure 12B is a photograph of an example pizza dough. In Figure 12A, the screen 1200 presents the pizza’s name 1210, the current step’s number 1220, a text instruction 631 for the current step, an image (or a video stream) 1230 of the pizza being prepared, a progress indicator 1240, and the time elapsed for the order 1260.
Progress Based on Size of Dough
[123] Step 1 is to prepare a ‘10-inch’ dough. The system may process one or more images of the dough 1250 to compute the dough’s size (e.g., length, diameter, 2-dimensional area). The system may compute progress of Step 1 using the computed dough size. In Figure 12A, the current progress of 90% is computed as a ratio of the computed dough size (9 inches) to the required size (10 inches) for completing Step 1, although not limited thereto. In an implementation, the system may consider one or more of the dough’s shape, 2-dimensional area, thickness, freshness, and color to determine progress of Step 1, although not limited thereto.
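The 90% in Figure 12A follows directly from that ratio; a one-function sketch (measuring the diameter itself is left to the image-processing stage):

```python
def dough_progress(measured_diameter_in: float,
                   required_diameter_in: float = 10.0) -> float:
    """Progress of Step 1 as the measured-to-required diameter ratio,
    expressed as a percentage and capped at 100."""
    return min(measured_diameter_in / required_diameter_in, 1.0) * 100.0

# dough_progress(9.0) -> 90.0, the value shown in Figure 12A.
```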
Completion of Dough Preparation Step
[124] The system may determine completion of Step 1 when the dough’s size satisfies Step 1’s predetermined requirement. In an implementation, when a pre-baked dough is used for the pizza 220, the system may determine completion of the dough preparation step when the pre-baked dough is placed on the table 110. After determining completion of Step 1, the system starts to provide guidance for the next step in the recipe, Step 2.
Screen for Sauce Adding Step
[125] Figure 13A is an example screen 1300 for Step 2 (applying sauce) of the example recipe 600. Referring to Figure 13A, the screen presents an image 1330 featuring the dough 1250 prepared at Step 1 and sauce 1350 applied over the dough. The screen may also present an instruction 632 for Step 2 and a progress indicator 1340. Figure 13B is a photograph of an example pizza dough with sauce added.
Progress Based on Area of Sauce
[126] Step 2 is to apply sauce over 3/4 of the dough prepared at Step 1. The system may process one or more images of the pizza being prepared to compute a 2-dimensional area of the dough 1250 and a 2-dimensional area of the sauce 1350 placed on the dough. Using the computed areas, the system may compute a ratio of the sauce area to the required area (3/4 of the dough area) as the progress measure of Step 2. In an implementation, the system may compute the dough’s area assuming the dough has a circular shape and using the diameter of the dough. In an implementation, as shown in Figure 13B, the system may draw a box 1371 surrounding a dough 1372, and may use the box’s area for computing the progress measure. The system may use a process different from the examples.
Image Segmentation to Identify Sauced Area
[127] In implementations, the system may process the image 1330 using a machine-trained model to identify a first group (segment) of pixels as the sauced area 1350 and to identify a second group (segment) of pixels as the dough 1250 that is not covered with the sauce. The system may compute an area of the sauced area 1350 using the number of pixels in the first group, compute an area of the dough using the number of pixels in the second group, and compute a ratio between the two areas for evaluating progress of Step 2. If the first group (sauce) consists of 600 pixels in the image 1330 and the second group (dough not covered with the sauce) consists of 400 pixels, the system may determine that 60% of the dough is covered with the sauce.
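Assuming a segmentation model that labels each pixel as sauce, bare dough, or background, the coverage arithmetic of this paragraph is a pixel count; the label values below are assumptions.

```python
import numpy as np

BARE_DOUGH, SAUCE = 1, 2  # assumed segmentation labels (0 = background)

def sauce_coverage(label_map: np.ndarray) -> float:
    """Fraction of the dough covered by sauce, where the dough area is
    the sauced pixels plus the still-uncovered dough pixels."""
    sauce_px = int(np.count_nonzero(label_map == SAUCE))
    bare_px = int(np.count_nonzero(label_map == BARE_DOUGH))
    total = sauce_px + bare_px
    return sauce_px / total if total else 0.0

# 600 sauce pixels and 400 bare-dough pixels -> 0.6, the 60% of this example.
```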
Completion of Sauce Placing Step
[128] The system may determine completion of Step 2 when the sauced area 1350 is larger than a predetermined percentage of the 2-dimensional area of the dough. In an implementation, the system may determine completion of Step 2 using a criterion other than the area ratio.
Example Screen for Cheese Adding Step
[129] Figure 14A is an example screen 1400 for Step 3 (adding cheese) of the example recipe 600. Referring to Figure 14A, the screen presents an image 1430 featuring the dough 1250 prepared at Step 1, the sauce 1350 applied at Step 2, and cheese 1450 added over the dough. The screen also presents the instruction 633 for Step 3 and a progress indicator 1440. Figure 14B is a photograph of a pizza when cheese is being added. Figure 14C is another photograph showing a cheese adding process.
Computing Progress of Cheese Adding Step
[130] Step 3 is to place cheese to cover 90% of the sauce. The system may process one or more images of the pizza being prepared to compute a 2-dimensional area of the sauce 1350 and a 2-dimensional area of cheese added over the sauce 1350. The system may compute a ratio of the area of cheese to the area of the sauce as the progress measure 1440 of Step 3. A different process may be used to compute the progress measure.
Virtual Grid to Compute Progress of Cheese Adding Step
[131] In an implementation, the system may use a grid of virtual segments to determine how much cheese is placed on the sauce 1350. In Figure 14A, the system overlays the grid 1470 over the sauced area 1350 to virtually partition the sauced area into a plurality of sauced segments 1471. For each unit segment, the system determines whether it is covered with cheese, counts the number of cheese-covered segments, and computes a ratio of the cheese-covered segments to the entire set of sauced segments as the current progress 1440 of Step 3. In determining a cheese-covered segment, the system identifies a cheese-covered portion inside a segment based on the color of cheese and the color of sauce, and determines that the segment is a cheese-covered segment when the cheese-covered portion is greater than a predetermined percentage of the segment area. In an implementation, the system computes a representative color (e.g., an average) of the segment, and determines that the segment is a cheese-covered segment when the representative color is closer to that of the cheese, although not limited thereto. In Figure 14B, each of the green boxes 1472 represents a cheese-covered segment. In an implementation, the system may compute a progress index of Step 3 using a process different from the example.
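A compact rendering of the virtual-grid test, assuming an RGB crop of the sauced area and nominal cheese/sauce colors; the grid size and color constants are illustrative, not values from the disclosure.

```python
import numpy as np

CHEESE_RGB = np.array([245.0, 230.0, 180.0])  # assumed nominal colors
SAUCE_RGB = np.array([180.0, 45.0, 30.0])

def cheese_progress(sauced_crop: np.ndarray, grid: int = 8) -> float:
    """Partition the sauced area into grid x grid segments; a segment counts
    as cheese-covered when its mean color is nearer cheese than sauce."""
    h, w, _ = sauced_crop.shape
    covered = 0
    for r in range(grid):
        for c in range(grid):
            seg = sauced_crop[r * h // grid:(r + 1) * h // grid,
                              c * w // grid:(c + 1) * w // grid]
            mean = seg.reshape(-1, 3).mean(axis=0)
            if (np.linalg.norm(mean - CHEESE_RGB)
                    < np.linalg.norm(mean - SAUCE_RGB)):
                covered += 1
    return 100.0 * covered / (grid * grid)
```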
Image Segmentation to Identify Cheese
[132] In implementations, the system may process the image 1430 using a machine-trained model to classify a first group (segment) of pixels as cheese and a second group (segment) of pixels as sauce. The system may count the number of pixels for each group in the image 1430 (or its modified version), compute a 2-dimensional area for each group, and determine progress of Step 3 using the pixel counts and the computed areas. For example, if the first group (cheese) consists of 300 pixels in the image 1430 and the second group (sauce on the dough) consists of 700 pixels, the system may determine that 30% of the sauce is covered with the cheese.
Completion of Cheese Adding Step
[133] In an implementation, the system may determine completion of Step 3 when cheese covers more than a predetermined percentage of the 2-dimensional area of the pizza dough, or of a sauced area within the 2-dimensional area (i.e., when the computed progress reaches 100%), although not limited thereto. Subsequent to completion of Step 3, the system may provide an instruction to start Step 4.
Example Screen for Pepperoni Adding Step
[134] Figure 15A is an example screen for a pepperoni adding step. The screen 1500 presents a current image 1530 featuring the dough 1250, the sauce 1350, and the cheese 1450 prepared at Step 3. The screen also presents an instruction 634 for Step 4 and a progress indicator 1540. Figure 15B is a photograph of a pepperoni pizza being prepared.
Progress Based on Counting of Pepperoni
[135] Step 4 is to add 12 slices of pepperoni over the cheese placed at Step 3. The system may process a current image of the pizza to identify pepperoni slices and to count the pepperoni slices added over the cheese. In Figure 15A, the current progress of Step 4 (50%) is computed as the ratio of the current number of pepperoni slices (six) to the predetermined number (twelve), although not limited thereto. In an implementation, the system may count a pepperoni slice only when it is greater than a predetermined size. The system may not count a pepperoni slice when it does not meet a predetermined requirement for pepperoni.
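Counting-based progress for Step 4 is then a filtered count over detected slices; the minimum-area threshold below is an assumption, and slice detection itself is left to the model described earlier.

```python
def pepperoni_progress(slice_areas_px, min_area_px: float = 500.0,
                       required: int = 12) -> float:
    """Progress of Step 4: count only slices above the minimum size and
    take the ratio to the required twelve slices, as a percentage."""
    valid = sum(1 for area in slice_areas_px if area >= min_area_px)
    return min(valid / required, 1.0) * 100.0

# Six qualifying slices -> 50.0, the progress shown in Figure 15A.
```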
Determining Completion of Pepperoni Adding Step
[136] The system may determine completion of Step 4 when the count of pepperoni slices reaches the predetermined number of twelve, although not limited thereto. Subsequent to completion of Step 4, the system may provide an instruction to bake the pizza (Figure 17).
Progress Index When Food is Not Fully Visible
[137] Figure 16 shows another example screen 1600 of Step 4 that is subsequent to the screen 1500. In Figure 16, a hand 1610 is adding the seventh pepperoni slice 1670 to the pizza of the image 1530 (having six pepperoni slices), but only five pepperoni slices are visible in the image 1630. If a progress index of Step 4 were computed based on the number of currently visible pepperoni slices, the progress would be lower than the 50% shown in Figure 15A. It may confuse the person 210 if the system lowers the progress index in real time when a hand is obstructing the camera’s view. To avoid such confusion, the system may not update a progress index when the pizza being prepared is not fully visible. In an implementation, the computing system 160 processes a camera image to determine whether the food being prepared is fully visible in the image, and does not consider the image for computing a progress index or evaluating food preparation quality when the pizza is not fully visible.
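The occlusion guard can be a thin wrapper around the progress computation. In this sketch, `frame_fully_visible` stands in for the (assumed) hand/occlusion detector.

```python
def guarded_progress(displayed: float, frame, frame_fully_visible,
                     compute_progress) -> float:
    """Hold the last trusted progress value while a hand (or anything
    else) blocks the camera's view of the pizza."""
    if not frame_fully_visible(frame):
        return displayed  # skip the occluded frame entirely
    return compute_progress(frame)
```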
Computing Progress Using Machine-Trained Model
[138] In an implementation, the system uses a machine-trained model to compute a progress index for a recipe step and to determine completion of the recipe step. In an implementation, the system may train a model such that the model outputs a progress index of a recipe step in response to an input of an image of a pizza being prepared. For example, the system may use a machine-trained model configured to determine completion of Step 3 in response to an image featuring cheese covering a sauced dough.
Recipe Completion Message
[139] When the last step of a current recipe is completed, the system may present a screen indicating that the food is ready for serving or for further processing. Figure 17 is an example screen 1700 notifying that a pizza prepared at the system is ready to bake.
Performance Feedback
[140] Figure 18 is an example screen 1800 provided after completing all four steps of the example recipe. The feedback screen 1800 includes, for each step, (1) a first performance index 1810 based on preparation time and (2) a second performance index 1820 based on preparation quality. In an implementation, the system may provide an additional performance index, and may omit one or more of the example performance indices.
Performance Rating Based on Preparation Time
[141] In implementations, when a person performs each step of the recipe, the system collects data to evaluate the person’s performance for each step. For example, the system measures a completion time for each step, compares the measured completion time with a predetermined desirable completion time, and computes a performance index representing how fast the worker completed the step. In an implementation, the system updates the person’s preparation speed rating 684 using the first performance indices 1810.
Performance Based on Preparation Quality
[142] In implementations, at the end of each recipe step, the system evaluates the step using one or more criteria for determining a properly-performed step. Examples of the criteria were explained in connection with the example recipe data. In an implementation, for Step 2, the system computes a performance index representing how evenly the sauce is spread on the dough. In an implementation, the system updates the person’s preparation quality rating 685 using the second performance indices 1820.
MACHINE-TRAINED MODEL (ARTIFICIAL INTELLIGENCE)
[143] In implementations, the computing system 160 uses a machine-trained model for determining the location of a food ingredient and for monitoring progress of a recipe step.
Machine-trained Model for Identifying Food Ingredients
[144] A machine-trained model of an implementation is configured to, in response to an input of data of a photographic image, output information of one or more food ingredients featured in the photographic image. In an implementation, the system may use a machine-trained model configured to perform image segmentation of a camera image for identifying objects (pans, food ingredients) in the image.
Data Set for Training Machine-trainable Model
[145] A data set for training a model includes a number of data pairs. Each pair includes input data for the machine-trainable model and desirable output data (a label) from the model in response to the input data. For example, for a machine-trainable model to identify food ingredients, the input data includes an image of a predetermined size that features one or more food ingredients, and the desirable output data includes one or more identifiers (names) of the featured food ingredients. For another example, for a machine-trainable model to evaluate progress of a recipe step, the input data includes images of food being prepared, and the desirable output data includes a percentage indicating progress of a food preparation step.
Training of Machine-trainable Model
[146] In an implementation, a supervised learning technique can be used to prepare the machine-trained model. Any known learning technique can be applied to the training of the model as long as the technique can configure the model to output, in response to training input images, a name (identifier) of a food ingredient within a predetermined allowable error rate.
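For concreteness, the training pairs described above might be laid out as below before being fed to a supervised learner; the file paths and label choices are invented for the example.

```python
# (input image path, label) pairs for the ingredient-identification model.
ingredient_pairs = [
    ("frames/pan_0001.png", "pepperoni"),
    ("frames/pan_0002.png", "cheese"),
]

# (input image path, progress label) pairs for the step-progress model.
progress_pairs = [
    ("frames/pizza_0310.png", 0.50),  # e.g., six of twelve pepperoni slices
    ("frames/pizza_0377.png", 1.00),  # step completed
]
```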
Various Structure of Machine-Trained Model
[147] In an implementation, a convolutional neural network (CNN) is used to construct the machine-trained model. In general, a convolutional neural network requires a smaller number of model parameters when compared to a fully connected neural network. In an implementation, a neural network other than a CNN can be used.
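A minimal CNN of the kind this paragraph contemplates, written with PyTorch as an assumed framework; the layer sizes, input resolution, and five-ingredient output are placeholders rather than the disclosed architecture.

```python
import torch.nn as nn

class IngredientNet(nn.Module):
    """Small convolutional classifier for food-ingredient images."""
    def __init__(self, num_ingredients: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Convolutions share weights spatially, which is why the parameter
        # count stays far below a fully connected network over raw pixels.
        self.classifier = nn.Linear(32 * 56 * 56, num_ingredients)

    def forward(self, x):  # x: (N, 3, 224, 224)
        x = self.features(x)
        return self.classifier(x.flatten(1))
```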
COMPUTING SYSTEM
General Architecture
[148] Figure 19 depicts an example architecture of a computing system 160 that can be used to perform one or more of the techniques described herein or illustrated in other drawings. The general architecture of the computing system 160 includes an arrangement of computer hardware and software modules that may be used to implement one or more aspects of the present disclosure. The computing system 160 may include many more (or fewer) elements than those shown in Figure 19. It is not necessary, however, that all of these elements be shown in order to provide an enabling disclosure.
Hardware
[149] As illustrated, the computing system 160 includes a processor 1610, a network interface 1620, a computer readable medium 1630, and an input/output device interface 1640, all of which may communicate with one another by way of a communication bus. The network interface 1620 may provide connectivity to one or more networks or computing systems. The processor 1610 may also communicate with memory 1650 and further provide output information for one or more output devices, such as a display (e.g., display 1641), speaker, etc., via the input/output device interface 1640. The input/output device interface 1640 may also accept input from one or more input devices, such as a camera 1642 (e.g., 3D depth camera), a keyboard, a mouse, a digital pen, a microphone, a touch screen, a gesture recognition system, a voice recognition system, an accelerometer, a gyroscope, a thermometer, an optical temperature measurement system, a sonar, a LIDAR device, a laser device, etc.
Software - Computer Program Instructions
[150] The memory 1650 may store computer program instructions (grouped as modules in some implementations) that the processor 1610 executes in order to implement one or more aspects of the present disclosure. The memory 1650 may include RAM, ROM, and/or other persistent, auxiliary, or non-transitory computer-readable media. The memory 1650 may store an operating system 1651 that provides computer program instructions for use by the processor 1610 in the general administration and operation of the computing system 160. The memory 1650 may further include computer program instructions and other information for implementing one or more aspects of the present disclosure. In one implementation, for example, the memory 1650 includes a user interface module 1652 that generates user interfaces (and/or instructions therefor) for display, for example, via a browser or application installed on the computing system 160. In addition to and/or in combination with the user interface module 1652, the memory 1650 may include an image processing module 1653, a machine-trained model 1654 that may be executed by the processor 1610. The operations and algorithms of the modules are described in greater detail above with reference to other drawings.
Multiple Components
[151] Although a single processor, a single network interface, a single computer readable medium, a single input/output device interface, a single memory, a single camera, and a single display are illustrated in the example of Figure 19, in other implementations, the computing system 160 can have multiples of one or more of these components (e.g., two or more processors and/or two or more memories).
Other Considerations
[152] Logical blocks, modules or units described in connection with implementations disclosed herein can be implemented or performed by a computing device having at least one processor, at least one memory and at least one communication interface. The elements of a method, process, or algorithm described in connection with implementations disclosed herein can be embodied directly in hardware, in a software module executed by at least one processor, or in a combination of the two. Computer-executable instructions for implementing a method, process, or algorithm described in connection with implementations disclosed herein can be stored in a non-transitory computer readable storage medium.
OTHER CONSIDERATIONS
[153] Although the implementations of the inventions have been disclosed in the context of certain implementations and examples, it will be understood by those skilled in the art that the present inventions extend beyond the specifically disclosed implementations to other alternative implementations and/or uses of the inventions and obvious modifications and equivalents thereof. In addition, while a number of variations of the inventions have been shown and described in detail, other modifications, which are within the scope of the inventions, will be readily apparent to those of skill in the art based upon this disclosure. It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the implementations may be made and still fall within one or more of the inventions. Accordingly, it should be understood that various features and aspects of the disclosed implementations can be combined with or substituted for one another in order to form varying modes of the disclosed inventions. Thus, it is intended that the scope of the present inventions herein disclosed should not be limited by the particular disclosed implementations described above, and that various changes in form and details may be made without departing from the spirit and scope of the present disclosure as set forth in the following claims.

Claims

WHAT IS CLAIMED IS
1. A method for use in food preparation, the method comprising: providing a food preparation table, a pan array located next to the food preparation table, food pans arranged on the pan array, indicating lights, and at least one camera; providing at least one database storing data relating to predefined zones of the pan array and the indicating lights, wherein a predefined zone is preassigned to at least one indicating light and further linked thereto such that the at least one database is to be referred to for linkage between a predefined zone and at least one indicating light preassigned thereto; capturing, using the at least one camera, images of the pan array located next to the food preparation table such that the captured images feature food pans arranged on the pan array and ingredients contained therein; processing the captured images to determine a location on the pan array of an ingredient featured on at least part of the captured images such that the ingredient is determined to be in one of the predefined zones of the array; updating the at least one database to link the ingredient to at least one of the indicating lights that is preassigned to the determined one of predefined zones on the at least one database; wherein the steps of capturing images of the pan array, processing the captured images, and updating the at least one database are performed repeatedly; wherein at a first time a first one of the ingredients is located in a first one of the predefined zones of the pan array, at a second time the first ingredient is located in a second one of the predefined zones of the pan array, and at a third time between the first time and the second time, the first ingredient is moved from the first predefined zone to the second predefined zone such that at the first time the first ingredient is linked to a first indicating light preassigned to the first predefined zone on the at least one database and further such that at the second time the first ingredient is linked to a second indicating light preassigned to the second predefined zone on the at least one database; and referring to the at least one database to generate a first guidance at the first time and a second guidance at the second time, wherein the first guidance indicates the first ingredient using the first indicating light as the first ingredient is located in the first predefined zone and is linked to the first indicating light on the at least one database at the first time, whereas the second guidance indicates the first ingredient using the second indicating light as the first ingredient is located in the second predefined zone and is linked to the second indicating light on the at least one database at the second time.
2. The method of Claim 1, wherein processing the captured images comprises identifying at least part of the ingredients based on color information contained in the at least part of the captured images.
3. The method of Claim 2, wherein the at least one camera further captures images of the food preparation table and food being prepared thereon, wherein the method further comprises determining completion of a food preparation step based on the captured images of the food being prepared on the food preparation table and further based on a predetermined completion criterion for the food preparation step.
4. The method of Claim 1, wherein the at least one camera further captures images of the food preparation table and food being prepared thereon, wherein the method further comprises determining completion of a food preparation step based on the captured images of the food being prepared on the food preparation table and further based on a predetermined completion criterion for the food preparation step.
5. The method of Claim 1, wherein the first guidance is for a step to prepare a first food item, wherein the second guidance is for a step to prepare another food item, for a later step to prepare the first food item, or for the same step to prepare the first food item that is run at a later time.
6. The method of Claim 2, wherein the first guidance is for a step to prepare a first food item, wherein the second guidance is for a step to prepare another food item, for a later step to prepare the first food item, or for the same step to prepare the first food item that is run at a later time.
7. The method of Claim 3, wherein the first guidance is for a step to prepare a first food item, wherein the second guidance is for a step to prepare another food item, for a later step to prepare the first food item, or for the same step to prepare the first food item that is run at a later time.
8. The method of Claim 4, wherein the first guidance is for a step to prepare a first food item, wherein the second guidance is for a step to prepare another food item, for a later step to prepare the first food item, or for the same step to prepare the first food item that is run at a later time.
9. The method of any one of Claims 1 to 8, wherein the at least one database further stores a recipe comprising a sauce step for spreading sauce on a pizza dough placed on the food preparation table, wherein the method further comprises: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the sauce step is completed based on at least part of the captured images of pizza preparation, wherein determining completion of the sauce step comprises: processing an image of pizza preparation captured during the sauce step to identify a first group of pixels, each of which is located within an outer boundary of the pizza dough, obtaining a 2-dimensional area of the pizza dough based on a count of pixels of the first group, processing the image of pizza preparation or its modified version to identify a second group of pixels, each of which belongs to a sauce area where the sauce is applied over the pizza dough, obtaining a 2-dimensional size of the sauce area based on a count of pixels of the second group, and computing a percentage of the 2-dimensional size of the sauce area with reference to the 2-dimensional area of the pizza dough.
10. The method of any one of Claims 1 to 8, wherein the at least one database further stores a recipe comprising a cheese step for adding cheese over a pizza dough placed on the food preparation table, wherein the method further comprises: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the cheese step is completed based on at least part of the captured images of pizza preparation; wherein determining completion of the cheese step comprises: overlaying a grid pattern on a 2-dimensional area of the pizza dough in an image of pizza preparation captured during the cheese step, for each grid unit of the grid pattern, determining if the cheese occupies the grid unit based on color information of the grid unit, and counting the number of grid units occupied by the cheese.
11. The method of Claim 10, wherein for each grid unit a representative color is computed, and the representative color is compared against a predetermined color value to determine if the cheese occupies the grid unit, wherein the representative color is an average of pixel color values of pixels within each grid unit.
12. The method of any one of Claims 1 to 8, wherein the at least one database further stores a recipe comprising a pepperoni step for adding pepperoni over a pizza dough placed on the food preparation table, wherein the method further comprises: capturing, using the at least one camera, images of pizza preparation on the food preparation table performed by a person, and determining whether the pepperoni step is completed based on at least part of the captured images of pizza preparation; wherein determining completion of the pepperoni step comprises: processing an image of pizza preparation captured during the pepperoni step to compute a count of pepperoni slices placed over the pizza dough, and determining completion of the pepperoni step when the computed count of pepperoni slices is equal to or greater than a predetermined number.
13. A food preparation system comprising: a food preparation table; a pan array located next to the food preparation table; food pans arranged on the pan array; indicating lights configured to indicate predefined zones of the pan array; at least one camera configured to capture images of the pan array; at least one database storing data relating to predefined zones of the pan array and the indicating lights, wherein a predefined zone is preassigned to at least one indicating light and further linked thereto such that the at least one database is to be referred to for linkage between a predefined zone and at least one indicating light preassigned thereto; and a computing system configured to cause the food preparation system to: capture, using the at least one camera, images of the pan array located next to the food preparation table such that the captured images feature food pans arranged on the pan array and ingredients contained therein, process the captured images to determine a location on the pan array of an ingredient featured on at least part of the captured images such that the ingredient is determined to be in one of the predefined zones of the array, update the at least one database to link the ingredient to at least one of the indicating lights that is preassigned to the determined one of predefined zones on the at least one database, repeat capturing images of the pan array, processing the captured images and updating the at least one database, and generate guidances for a person working at the food preparation table with reference to the at least one database; wherein if at a first time a first one of the ingredients is located in a first one of the predefined zones of the pan array, at a second time the first ingredient is located in a second one of the predefined zones of the pan array, and at a third time between the first time and the second time, the first ingredient is moved from the first predefined zone to the second predefined zone, the computing system is configured to cause the food preparation system to update the at least one database such that at the first time the first ingredient is linked to a first indicating light preassigned to the first predefined zone on the at least one database, and at the second time the first ingredient is linked to a second indicating light preassigned to the second predefined zone on the at least one database, and further such that a first one of the guidances indicates the first ingredient using the first indicating light as the first ingredient is located in the first predefined zone and is linked to the first indicating light on the at least one database at the first time, and a second of the guidances indicates the first ingredient using the second indicating light as the first ingredient is located in the second predefined zone and is linked to the second indicating light on the at least one database at the second time.
14. The food preparation system of Claim 13, wherein the computing system is further configured to determine color information contained in the at least part of the captured images and to identify at least part of the ingredients based on the color information.
15. The food preparation system of any one of Claims 13 and 14, wherein the at least one camera is further configured to capture images of the food preparation table and food being prepared thereon, wherein the computing system is configured to determine completion of a food preparation step based on the captured images of the food being prepared on the food preparation table and further based on a predetermined completion criterion for the food preparation step.
16. The food preparation system of Claim 15, wherein the at least one database stores a recipe comprising a sauce step for spreading sauce on a pizza dough placed on the food preparation table, wherein the computing system is configured to:
process an image of pizza preparation captured during the sauce step to identify a first group of pixels, each of which is located within an outer boundary of the pizza dough, process the image of pizza preparation or its modified version to identify a second group of pixels, each of which belongs to a sauce area where the sauce is applied over the pizza dough, and determine completion of the sauce step based on a count of pixels of the first group and further based on a count of pixels of the second group.
17. The food preparation system of Claim 15, wherein the at least one database stores a recipe comprising a cheese step for adding cheese over a pizza dough, wherein the computing system is configured to: overlay, on an image of pizza preparation captured during the cheese step, a grid pattern comprising a plurality of unit grids, identify a first group of unit grids, each of which is located within an outer boundary of the pizza dough, identify a second group of unit grids, each of which belongs to a cheese area where cheese is applied over the pizza dough, and determine completion of the cheese step based on a count of unit grids of the first group and further based on a count of unit grids of the second group.
18. The food preparation system of Claim 15, wherein the at least one database stores a recipe comprising a pepperoni step for placing pepperoni slices over a pizza dough, wherein the computing system is configured to: process an image of pizza preparation captured during the pepperoni step to compute a count of pepperoni slices placed over the pizza dough; and determine completion of the pepperoni step when the computed count of pepperoni slices is equal to or greater than a predetermined number.
PCT/US2022/042372 2021-09-01 2022-09-01 Kitchen system with food preparation station WO2023034521A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US17/464,430 US11544925B1 (en) 2021-09-01 2021-09-01 Kitchen system with food preparation station
US17/464,405 US20230063320A1 (en) 2021-09-01 2021-09-01 Kitchen system with food preparation station
US17/464,405 2021-09-01
US17/464,430 2021-09-01

Publications (1)

Publication Number Publication Date
WO2023034521A1 true WO2023034521A1 (en) 2023-03-09

Family

ID=85411565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/042372 WO2023034521A1 (en) 2021-09-01 2022-09-01 Kitchen system with food preparation station

Country Status (1)

Country Link
WO (1) WO2023034521A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170290345A1 (en) * 2016-04-08 2017-10-12 Zume Pizza, Inc. On-demand robotic food assembly and related systems, devices and methods
US20180257219A1 (en) * 2014-02-20 2018-09-13 Mbl Limited Methods and systems for food preparation in a robotic cooking kitchen
US20190200797A1 (en) * 2017-12-30 2019-07-04 Midea Group Co., Ltd Food preparation method and system based on ingredient recognition
US20200184530A1 (en) * 2018-12-07 2020-06-11 Decopac, Inc. Systems and Methods for Ordering and Preparation of Customized Comestibles


Similar Documents

Publication Publication Date Title
CN106155002B (en) Intelligent household system
US20210350136A1 (en) Method, apparatus, device, and storage medium for determining implantation location of recommendation information
WO2018165038A1 (en) Augmented reality-enhanced food preparation system and related methods
JP6918523B2 (en) A program that causes a computer to execute an information processing system, an information processing device, an information processing method, and an information processing method.
JP6444655B2 (en) Display method, stay information display system, display control device, and display control method
US20180082244A1 (en) Adaptive process for guiding human-performed inventory tasks
WO2020248458A1 (en) Information processing method and apparatus, and storage medium
US11562569B2 (en) Image-based kitchen tracking system with metric management and kitchen display system (KDS) integration
JP2024019591A (en) Information processing device, information processing system, control method, and program
JP7420077B2 (en) Information processing device, information processing method, and program
CN109166614A (en) A kind of system and method for recommending personal health menu
RU2679229C1 (en) Method and system of automated synchronization of the process of collecting of goods in a store on the basis of users orders
CN106462156A (en) Issue tracking and resolution system
US11544925B1 (en) Kitchen system with food preparation station
US20230063320A1 (en) Kitchen system with food preparation station
WO2023034521A1 (en) Kitchen system with food preparation station
WO2022144400A1 (en) Food processing system
US20220318816A1 (en) Speech, camera and projector system for monitoring grocery usage
CN108363851B (en) Planting control method and control device, computer equipment and readable storage medium
CN110110246B (en) Shop recommendation method based on geographic information grid density
US20230145313A1 (en) Method and system for foodservice with instant feedback
IL308975A (en) Using slam 3d information to optimize training and use of deep neural networks for recognition and tracking of 3d object
US11010903B1 (en) Computer vision and machine learning techniques for item tracking
US20220414803A1 (en) Storage tank management system and storage tank management method
CN114200851A (en) Data processing method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22865568

Country of ref document: EP

Kind code of ref document: A1