WO2022025282A1 - Learning control system - Google Patents

Learning control system Download PDF

Info

Publication number
WO2022025282A1
WO2022025282A1 · PCT/JP2021/028444 · WO 2022/025282 A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
cooking
food
image
control system
Prior art date
Application number
PCT/JP2021/028444
Other languages
French (fr)
Japanese (ja)
Inventor
裕士 白木
ジュイ パン
佳慶 藺
Original Assignee
TechMagic株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TechMagic株式会社 filed Critical TechMagic株式会社
Publication of WO2022025282A1 publication Critical patent/WO2022025282A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services

Definitions

  • The present invention relates to a learning control system.
  • Patent Document 1 describes a learning device, applied to cooking by a robot, provided with a relationship information learning unit that learns the relationships among biological information, emotional information, and environmental information by machine learning.
  • Patent Document 2 describes a cooking system that acquires detection information from a sensor monitoring the cooking state, retrieves from a storage means reference information indicating what the sensor should show when the cooking state is appropriate, generates instruction information for the cook from the detection information and the reference information, and outputs the instructions to an output means.
  • Patent Document 3 describes a cooking support system that can instruct the cook on an appropriate cooking time by learning from a teacher data table whose entries combine dish name, cooking process, ingredient name, quantity, cutting method, cut size, and cooking time.
  • The method described in Patent Document 3 learns under limited conditions in a specific environment and therefore cannot be used universally across various stores.
  • In view of these problems, an object of the present invention is to provide a highly versatile learning control system that can respond to environmental changes based on information that can be reliably detected on site, such as in a cooking environment, to eliminate variation in cooking quality between cooks, and to enable menu sharing and uniform cooking quality.
  • The learning control system comprises at least one of the following machine learning modules: (a) a demand forecast module that forecasts sales, (b) an image recognition module that recognizes objects, (c) a voice recognition module that recognizes the state of an object from sound, (d) a recommendation module that recommends products to customers, or (e) an abnormality detection module that detects abnormal conditions of equipment. It further has an AI platform including (f) a learning module that performs machine learning other than (a) to (e) and/or uses the information and results of at least one of the modules (a) to (e), and it is characterized by performing learning control using a model trained by the AI platform.
  • According to embodiments of the present invention, it is possible to provide a learning control system that can respond to environmental changes based on information that can be reliably detected on site, such as in a cooking environment.
  • Dish quality ordinarily varies with the skill of the cook, but using such a learning control system makes it possible to eliminate this variation between cooks.
  • In addition, by sharing information on cooking methods and recipes across all stores, menus can be shared and cooking quality made uniform.
  • FIG. 1 is a block diagram of the learning control system of the present embodiment.
  • The AI platform 10 is connected to the cooking robot 21, the business automation AI robot 22, and the AI restaurant 23 via the information network 20, and they communicate with one another.
  • The cooking robot 21 automates a series of cooking processes in the kitchen and is applicable to stores and retail distribution.
  • The business automation AI robot 22 automates simple tasks associated with cooking and is expected to be used, for example, in factories outside stores.
  • An AI restaurant is a restaurant managed automatically by AI trained through machine learning. The level of automation can be chosen freely; if automation is taken far enough, a fully automated restaurant can be achieved. In practice, it is often implemented as a labor-saving restaurant that is fully automated during business hours, with some adjustments and work such as maintenance performed manually.
  • The AI platform 10 trains the AI of the cooking robot 21, the business automation AI robot 22, and the AI restaurant 23, collects big data for AI training, and can provide information for the management and marketing of stores, factories, and restaurants using various analysis data including AI outputs. For example, the AI platform 10 performs deep learning based on big data and provides the trained models to the cooking robot 21, the business automation AI robot 22, the AI restaurant 23, and so on, so that stores such as the AI restaurant 23 can perform image recognition, automatic cooking, cooking assistance, and the like using the trained models.
  • For example, customer image data collected at each store or restaurant, together with other data, is analyzed on the AI platform for use in assessing in-store congestion, store operation, marketing, and so on.
  • The data analyzed by the AI platform includes, for example, store information (store name, number of seats, number of visitors, time distribution of visitors), product information (product name, price, cost, sales quantity per product, serving time per product, expiration date per ingredient, purchasing information), and image and audio information relating to products, foods, and their cooking.
  • For AI customer analysis, customer attributes such as age and gender, visit-time distribution, and length of stay are recognized from in-store image data and automatically analyzed and visualized.
  • However, no analysis that identifies individuals is performed in customer analysis. The vast amount of data containing this information is also used to train the various AIs on the AI platform.
  • The AI platform 10 includes an AI supply/demand forecasting unit 11, an AI automatic food loss forecasting unit 12, an AI customer analysis unit 13, an AI product analysis unit 14, an AI staff work analysis unit 15, an AI quality control unit 16, and the like.
  • The AI platform 10 is composed of, for example, the modules (a) to (f) described later, and these modules are used, for example, for the analysis in each of the analysis units 11 to 16.
  • By machine learning from information such as past results, weather, events, promotions, and other store data, the AI can predict the number of visitors, sales volume, and so on, and can visualize the results.
  • The AI automatic food loss forecasting unit 12 predicts and calculates the required raw materials and staffing based on the demand forecast, reducing the store manager's shift-creation and purchasing work, avoiding over-staffing, and reducing food loss.
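As a rough illustration of this kind of demand forecasting, a minimal sketch using an off-the-shelf regression model; all feature names and numbers here are hypothetical, not values from this disclosure:

```python
# A minimal demand-forecasting sketch: predict visitors from simple features,
# then derive a rough staffing estimate. All data is illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# Past results: [day_of_week, is_holiday, temperature_C, has_event]
X = np.array([
    [0, 0, 18.0, 0],
    [4, 0, 22.5, 1],
    [5, 1, 25.0, 0],
    [6, 1, 21.0, 1],
])
y = np.array([120, 180, 240, 260])  # visitors served on those days

model = LinearRegression().fit(X, y)

tomorrow = np.array([[5, 1, 23.0, 0]])
visitors = float(model.predict(tomorrow)[0])
staff_needed = max(2, round(visitors / 40))  # assume one staff per ~40 visitors
print(f"forecast visitors: {visitors:.0f}, staff needed: {staff_needed}")
```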
  • In the AI customer analysis unit 13, a camera with AI functions automatically recognizes customer attributes, analyzing and visualizing the customer's gender, age group, visiting time period, length of stay, and so on; this analysis information is used for marketing.
  • The AI product analysis unit 14 analyzes and visualizes product trends by customer attributes, region, and so on in product purchases; this analysis information is used for product development.
  • The AI staff work analysis unit 15 can analyze and visualize working time by associating the work of store staff with the order menu, and the analysis information can be used to improve the store's turnover rate.
  • The AI quality control unit 16 analyzes the cooking process to pursue deliciousness, to make the taste uniform regardless of the cook or store, and to optimize the cooking procedure.
  • HACCP (Hazard Analysis and Critical Control Point) refers to process control to ensure the safety of food and the like.
  • Deep learning is used, for example, for the analysis of the cooking process in the AI quality control unit 16, with training based on image information, audio information, and the like concerning products, foods, and their cooking.
  • The learning control system of the present embodiment includes the following steps, and machine learning by the AI platform is possible at each of them.
  • Store reservation (automatic restaurant management, remote reservation, seat information)
  • Order / payment (customized order and remote payment; recommended menu proposal, nutritional management, food allergies, seasoning preferences, ingredients / toppings)
  • Ingredients arrangement (central kitchen, preparation, cutting)
  • Supply of ingredients (preservation according to ingredients)
  • Cooking (recipe, heating, mixing)
  • Filling (automation of filling)
  • Providing and serving (automatic transportation)
  • Clearing (lower set) and washing (handling of tableware by image recognition)
  • Preparation (purchasing based on demand forecast, material replenishment, management/analysis)
  • As the machine learning modules, a configuration example including the following (a) to (f) can be constructed.
  • Baking and baking time (for example, image recognition or thermography may be used to determine when to turn the steak over).
  • (D) Boiling and deep-frying: heat adjustment and time adjustment (for example, the boiled state and fried state are judged by recognition of images and sounds).
  • (E) Ingredient recognition (for example, the type and state of ingredients are recognized from images and sounds, and an appropriate cooking method is determined).
  • (F) Recognition of cooking condition (for example, the degree of heating, heating time, and so on are managed by recognizing images and sounds).
  • LSTM: Long short-term memory.
  • (d) Recommendation module: recommends favorites based on past data. Although not particularly limited, unsupervised learning is adopted, for example, to find a person's characteristics and reveal tendencies.
  • (e) Abnormality detection module: notifies of abnormal equipment conditions; for example, an alert is raised when tempura oil deviates from its setpoint by ±10 degrees. On a factory line, an alert is issued even for one defect in 10,000, since strict management is required for safety. Although not particularly limited, unsupervised learning is performed, for example, and an alert is raised only when a threshold is exceeded.
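A minimal sketch of the threshold-style alerting described above; the setpoint and readings are hypothetical values for illustration:

```python
# Alert whenever the oil temperature leaves the +/-10 degree tolerance band.
SETPOINT_C = 180.0   # assumed target oil temperature
TOLERANCE_C = 10.0   # alert outside +/-10 degrees

def check_oil_temperature(readings_c):
    """Yield an alert message for every reading outside the tolerance band."""
    for t in readings_c:
        if abs(t - SETPOINT_C) > TOLERANCE_C:
            yield f"ALERT: oil at {t:.1f} C deviates more than {TOLERANCE_C} C from {SETPOINT_C} C"

for alert in check_oil_temperature([178.2, 181.0, 193.4, 168.9]):
    print(alert)
```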
  • (f) Learning module (three types of learning): performs machine learning other than the above (a) to (e), machine learning that uses the information and results of (a) to (e), or machine learning that combines this information and these results.
  • The machine learning includes, for example, regression; unsupervised learning includes, for example, LSTMs; and reinforcement learning includes segmentation (e.g., recognition of sushi toppings).
  • Reinforcement learning is also suitable for robot learning. It learns based on limited data, and typical methods include TD learning and Q-learning. Deep learning can be adopted in any of the modules.
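For context on the Q-learning mentioned above, a minimal tabular sketch on a toy five-state corridor; the states, rewards, and hyperparameters are illustrative assumptions, not from this disclosure:

```python
# Tabular Q-learning: the agent learns to walk right toward the goal state.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # TD update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # the right-moving action should dominate in every state
```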
  • In the AI quality control unit 16, the state, size, volume, and thickness of fat and lean meat are recognized, and the degree of heating, baking time, and timing of turning over are optimized by supervised learning.
  • The frying condition can likewise be optimized by supervised learning from the sound of the oil and the like. Furthermore, not only cooking times (frying time, baking time, boiling time, and so on) but also maintenance times can be notified. In addition, cooking can take into account the serving timing required by demand, based on the demand forecast.
  • As for sensors, learning and recognition are performed based on an image sensor, a sound sensor, documents, and the like. A temperature sensor, a weight sensor, and so on can also be provided. For example, the remaining quantity and quality of a soup bar can be estimated from images, temperature, weight, and the like.
  • In the case of a franchise chain, the AI platform can exploit the relationship between headquarters and stores: data is aggregated and learned at headquarters, and the resulting model is used at each store.
  • The products provided to customers are not particularly limited, but include various food and drink menus such as eat-in menus, take-out menus, food, soft drinks, alcoholic drinks, hot drinks, and cold drinks.
  • The stores are not particularly limited, but include, for example, restaurants, mobile stores, delivery stores, temporary stores, eat-in corners, food courts, accommodation facilities, schools, hospitals, cafeterias, supermarkets, department stores, mass retailers, and convenience stores.
  • For deep learning, a CNN is used, for example. The signals are not particularly limited, but, for example, as preprocessing, the teaching data is augmented by labeling and 2D transformations, and by processing signals such as RGB signals, HSV signals, and infrared image signals, learning efficiency and accuracy can be improved.
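As one illustration of the signal preprocessing mentioned above, a minimal sketch that derives RGB and HSV planes from a captured image; the file path and channel layout are assumptions, not part of this disclosure:

```python
import cv2
import numpy as np

def preprocess(path_rgb):
    """Load an image and derive RGB and HSV signal planes for training."""
    bgr = cv2.imread(path_rgb)                      # OpenCV loads as BGR
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Stack RGB and HSV planes into a 6-channel training signal;
    # an infrared plane could be appended the same way if available.
    x = np.concatenate([rgb, hsv], axis=2).astype(np.float32) / 255.0
    return x
```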
  • Specific examples of machine learning are described below as (Example 1) to (Example 9).
  • (Example 1) In this example, the process of optimizing the cooking of steak by the AI quality control unit 16 is described.
  • For steak, the ratio of lean meat to fat, its calories, and so on are judged from the image, and the condition of the meat and the degree of doneness are adjusted from color differences by image recognition of the image data input from the camera.
  • By image recognition, differences in taste between stores and variation in doneness between cooks can be eliminated, and cooking can be standardized.
  • Ingredients are cooked optimally at any store based on past cooking data.
  • The state of the raw meat is judged and optimal cooking is performed.
  • The steak cooking process is photographed with two cameras. For example, a GoPro can be used as the camera. The acquired video is labeled with the type of operation (turning over: flip; putting in: put_in; no operation: none) and its start and end points, and the labeled video is converted into multiple still images. For example, the image size can be 256 × 455 pixels. The mean and standard deviation of the converted image pixels are calculated. The data is split with 2/3 used for the training list and 1/3 for the verification list.
  • For verification, the number of verification images was increased using methods such as center cropping and normalization with the mean and standard deviation. As a method of generating time-series images, 16 consecutive frames were cut out in order along the time axis; if fewer than 16 were available, the last image was copied to pad the clip to 16.
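A minimal sketch of the 16-frame time-series clip generation just described; `frames` is assumed to be a non-empty, time-ordered list of still images (e.g. numpy arrays):

```python
CLIP_LEN = 16

def make_clips(frames, stride=CLIP_LEN):
    """Cut a time-ordered frame list into 16-frame clips, padding the last
    clip by repeating its final frame (as described above)."""
    clips = []
    for start in range(0, len(frames), stride):
        clip = frames[start:start + CLIP_LEN]
        while len(clip) < CLIP_LEN:       # pad by copying the last frame
            clip.append(clip[-1])
        clips.append(clip)
    return clips
```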
  • FIG. 2 shows the determination results for the meat put-in timing and turn-over timing: FIG. 2A correctly determines the put-in timing (put_in, 0.927), and FIG. 2B correctly determines the turn-over timing (flip, 0.907). The accuracy of the judgment was 97.78%.
  • FIG. 3 is the result of analyzing the number of operations, and the timing of putting in and the timing of turning over are appropriately determined.
  • FIG. 4 shows the results of analyzing the pixels of the meat image, and the grayscale, red, green, and blue image data were analyzed.
  • FIG. 4A shows a color image
  • FIG. 4B shows a grayscale image.
  • FIG. 4C shows the analysis result of the image by the gray scale
  • FIG. 4D shows the analysis result of the image by the red component
  • FIG. 4E shows the analysis result of the image by the green component
  • FIG. 4F shows the analysis result of the image by the blue component.
  • The state of the meat can be image-recognized by an AI trained on each of these images and analysis results.
  • The state of the meat can be discriminated by the image recognition of FIG. 4, and the baking condition can be adjusted based on the result.
  • The condition of the meat can be determined from the balance between lean and fat, the distribution of fat, the color of the lean and of the fat, and the volume, thickness, and shape of the meat. Since this makes it possible to calculate the meat's calories, adjust the degree of cooking, and identify where to start searing and which parts need thorough cooking, a steak can be provided with the optimum doneness for the condition of the meat.
  • The cooking robot can cook the steak to the optimum doneness based on the information from the AI. This makes it possible to provide uniform, high-quality steaks anytime, anywhere, independent of the store or the cook.
  • FIG. 5 is an explanatory diagram of time-series image analysis of the grilled color of the surface of the steak.
  • The steak meat (beef) portion is extracted from the cooking images by object recognition using the same deep learning as described above, and the grilled color of the beef surface is analyzed as a time series of images.
  • FIG. 5A is an image at the start of grilling; the beef cooks progressively in the order of FIGS. 5A, 5B, 5C, and 5D, and FIG. 5D shows the fully grilled state.
  • FIG. 5E is a graph of the change in brightness of the beef as it cooks: the vertical axis is the brightness (value) in the HSV color model, and the horizontal axis is the cooking time.
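As an illustration of the FIG. 5E analysis, a minimal sketch that tracks the mean HSV brightness of the meat region over the cooking frames; the mask input is a hypothetical stand-in for the object-recognition output:

```python
import cv2
import numpy as np

def brightness_curve(frames, meat_masks):
    """Return the mean HSV brightness (V channel) of the meat region per frame."""
    curve = []
    for frame, mask in zip(frames, meat_masks):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        v = hsv[:, :, 2]
        # brightness over the masked meat region; it changes as the surface browns
        curve.append(float(v[mask > 0].mean()))
    return curve
```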
  • FIG. 6 is an explanatory diagram of image recognition for analyzing the balance between lean meat and fat of steak and estimating the position of cutting into equal parts by volume.
  • In FIG. 6, the gray regions are the lean beef for the steak, the black regions are the fat, and the vertical lines are cut lines indicating positions that divide the volume equally.
  • It is possible to recognize the weight and volume of the meat: Meat Volume, Area, Fat Rate, and Calorie.
  • It is desirable to install the camera so that its optical axis is within about 20 degrees of inclination from the perpendicular to the observation target.
  • FIG. 7 is an explanatory diagram showing a state in which the same deep learning as described above is performed using the image of the thermography camera.
  • When a thermography camera is used, not only surface color changes but also temperature changes can be captured and analyzed, making it easier to teach a cooking method equivalent to that of a skilled cook.
  • In FIG. 7A, the average temperature is 53.85 °C and the temperature variation (STD) is 2.56 °C; FIG. 7B has an average temperature of 52.49 °C and an STD of 20.65 °C. A large STD indicates large unevenness in cooking.
  • The deep learning system outputs teaching content such as adjusting the position of the cooking target on the frying pan, here the position where the steak beef is placed, for example toward the center of the pan. By following this teaching content, cooking can be performed so that the STD stays within a predetermined allowable value.
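A minimal sketch of the thermography-based uniformity check described above; the allowable STD value is an assumption for illustration:

```python
import numpy as np

MAX_STD_C = 5.0  # assumed allowable unevenness

def grill_uniformity(thermal):
    """thermal: 2D array of surface temperatures (deg C) over the meat region."""
    mean_c, std_c = float(thermal.mean()), float(thermal.std())
    advice = "OK" if std_c <= MAX_STD_C else "reposition toward the pan center"
    return mean_c, std_c, advice

# Example with synthetic thermal data
print(grill_uniformity(np.random.normal(53.0, 2.5, size=(64, 64))))
```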
  • Beef has been described here as an example of steak meat, but the application of this example is not limited to steak meat; it can be applied to other dishes and ingredients as well, such as other meats, fish, and vegetables.
  • (Example 2) In this example, a system that gives advice on the boiling time of noodles, the temperature of the hot water, and the degree of heating, based on analysis of the cooking process in the AI quality control unit 16, is described.
  • Boiling noodles is given as the example, but the present embodiment is not limited to this; it is applicable to cooking utensils using various heating media other than hot water, for example a fryer using oil. If necessary, analysis by the AI demand forecasting unit 11, the AI customer analysis unit 13, and so on is also performed. (1) For store reservations, automatic restaurant management, remote reservations, and seat guidance are performed. (2) For orders and payments, customized orders and remote payment are possible, as are recommended menu proposals, nutrition management, food allergy handling, seasoning preferences, and extra ingredients and toppings.
  • The demand forecast module predicts sales according to the date, weather, events, and so on, and the recommendation module recommends menus matched to each customer based on past customer data.
  • For the arrangement of ingredients, preparation such as cutting is performed at the central kitchen.
  • For the supply of ingredients, they are stored in a manner appropriate to each ingredient.
  • During cooking, the image recognition module recognizes objects, the voice recognition module recognizes the cooking state and the like, the abnormality detection module notifies of abnormal equipment conditions and the like, and the reinforcement learning module learns based on limited data.
  • Filling is automated.
  • Serving is performed by fully automatic transportation. The congestion status of the store can be grasped in advance by the demand forecast module.
  • For clearing and washing, tableware is handled by image recognition.
  • For preparation, the demand forecast module forecasts sales based on the date, weather, events, and so on, and the image recognition module recognizes objects such as the type, shape, and size of ingredients.
  • (Example 3) In this example, a salad bar is described, but the present embodiment is not limited to salad bars and can be applied to other self-serve arrangements such as soup bars and buffet-style service.
  • By using the AI platform, it is possible to appropriately determine the remaining amount and state of food and to perform replenishment, serving, and the like. If necessary, analysis is also performed by the AI demand forecasting unit 11, AI food loss forecasting unit 12, AI customer analysis unit 13, AI product analysis unit 14, AI quality control unit 16, and so on. (1) For store reservations, automatic restaurant management, remote reservations, and seat guidance are performed, and the demand forecast module forecasts sales.
  • The image recognition module recognizes objects such as the situation in the store and the number and state of customers, and grasps seat availability; the voice recognition module can recognize not only the state of the seating area but also the cooking situation. (2) For orders and payments, customized orders and remote payment are possible, as are recommended menu proposals, nutrition management, food allergy handling, seasoning preferences, and extra ingredients and toppings. The demand forecast module also forecasts sales. (3) For the arrangement of ingredients, preparation such as cutting is performed at the central kitchen. The image recognition module recognizes objects such as the type and quantity of ingredients, and the voice recognition module also recognizes store and cooking conditions, enabling appropriate arrangement of ingredients according to the situation. (4) For the supply of ingredients, they are stored in a manner appropriate to each ingredient.
  • The image recognition module recognizes objects such as the type and amount of ingredients, and the voice recognition module is also used to recognize store and cooking conditions.
  • During cooking, the image recognition module recognizes objects, the voice recognition module recognizes the cooking state and the like, the abnormality detection module notifies of abnormal equipment conditions, and the reinforcement learning module learns based on limited data.
  • During automatic transportation, the image recognition module recognizes objects such as the store situation, and the voice recognition module recognizes not only the store situation but also the cooking situation.
  • For clearing and washing, the image recognition module recognizes objects such as tableware, and the voice recognition module recognizes the store and cooking situations.
  • For preparation, the demand forecast module predicts sales based on the date, weather, events, and so on, and the image recognition module recognizes objects such as the state of the customer seating.
  • (Example 4) In this example, an automatic restaurant is described. In the automatic restaurant, analysis is performed as necessary by the AI demand forecasting unit 11, AI food loss forecasting unit 12, AI customer analysis unit 13, AI product analysis unit 14, AI staff work analysis unit 15, AI quality control unit 16, and so on.
  • The demand forecast module predicts sales; the image recognition module grasps the number and state of customers in the store by object recognition, so the number of vacant seats is determined automatically; and the voice recognition module recognizes not only the seating status but also the cooking situation.
  • Customized orders and remote payment are available, with recommended menu proposals, nutrition management, food allergy handling, seasoning preferences, and extra ingredients and toppings.
  • Sales can be grasped in advance with the demand forecast module.
  • Guests are greeted at the entrance and automatically guided to their seats.
  • For the arrangement of ingredients, preparation such as cutting is performed at the central kitchen.
  • The image recognition module recognizes objects such as ingredients, and the voice recognition module recognizes the cooking state.
  • The image recognition module recognizes objects such as the type and amount of ingredients, and the voice recognition module is also used to recognize store and cooking conditions.
  • During cooking, the image recognition module recognizes objects, the voice recognition module recognizes the cooking state and the like, and the abnormality detection module notifies of abnormal equipment conditions.
  • Filling is completely automated.
  • The image recognition module recognizes objects such as dishes and tableware, the voice recognition module recognizes the cooking state and the like, the abnormality detection module notifies of abnormal equipment conditions, and the reinforcement learning module learns based on limited data.
  • During automatic transportation, the image recognition module recognizes objects such as the store situation, and the voice recognition module recognizes not only the store situation but also the cooking situation. Furthermore, it is possible, for example, to detect an empty glass at a seat and add water.
  • For clearing and washing, the image recognition module recognizes objects such as tableware, and the voice recognition module recognizes the store and cooking situations.
  • Used tableware is automatically washed by the automatic washing robot. Furthermore, it is determined whether a customer has left their seat, has stepped away temporarily (for example, to the restroom), or has returned. Departing guests are guided to the exit and seen off. (9) For preparation, purchasing, material replenishment, and management/analysis are performed based on the demand forecast.
  • For preparation, the demand forecast module predicts sales based on the date, weather, events, and so on, and the image recognition module recognizes objects such as the state of the customer seating.
  • The automatic restaurant of the present embodiment is equipped with an automatic food serving system; it accepts food orders from customers' mobile terminals and the like, automatically cooks the food according to the recipe and cooking method for the order, and can serve the food to the customer.
  • The automatic restaurant of the present embodiment automatically grasps the vacant-seat status in the store, lets customers check vacancies and reserve seats remotely, accepts orders remotely, serves food to customers automatically, allows payment from the customer's terminal, automates serving, and automates collection of used serving containers.
  • Cleaning of serving containers after use is automated, replenishment of cleaned serving containers is automated, and cleaning of cooking containers after use is automated.
  • Guidance of customers to their seats in the store is automated, serving of food to customers is automated, prediction of customer orders is automated, ordering for ingredient purchases is automated, and delivered ingredients can be replenished automatically into the feeders.
  • As a means of automatically grasping the vacancy status in the store, an example using an image pickup device will be described.
  • An image pickup device such as a camera is installed in the store to photograph the seating area. By recognizing the captured images, the vacancy status in the store can be grasped automatically and in real time. As a result, vacant seats can be determined automatically and the vacancy information reflected in the reservation system.
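As one way such vacancy detection could be realized, a minimal sketch that matches detected people against seat regions; the person detector and the seat coordinates are hypothetical:

```python
# SEAT_REGIONS maps seat IDs to (x0, y0, x1, y1) image regions (hypothetical).
SEAT_REGIONS = {"T1": (0, 0, 200, 200), "T2": (220, 0, 420, 200)}

def box_center(box):
    x0, y0, x1, y1 = box
    return (x0 + x1) / 2, (y0 + y1) / 2

def vacant_seats(frame, detect_people):
    """detect_people is a stand-in for a person detector returning boxes."""
    people = [box_center(b) for b in detect_people(frame)]
    vacant = []
    for seat, (x0, y0, x1, y1) in SEAT_REGIONS.items():
        if not any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in people):
            vacant.append(seat)  # no detected person inside this seat region
    return vacant                # fed into the reservation system
```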
  • Customers can check the store's vacant seats in real time at any time and reserve a desired seat, and can reserve a seat and order food at the same time.
  • Since food preparation can be timed to the reservation, the ordered food can be served at an appropriate time after the customer enters the store.
  • As the image pickup means, an appropriate device such as a black-and-white camera, color camera, infrared camera, or video camera can be used.
  • Vacant seats can be determined not only by image recognition of moving images but also by image recognition of still images taken at predetermined intervals. Furthermore, by analyzing the food served to the customer by image recognition, the pace at which the customer is proceeding with the meal can be grasped. The automatic food serving system 1 can therefore serve dishes at the customer's pace and can recommend ordering additional items (including desserts and drinks) via the mobile terminal.
  • The system 1 recognizes the customer when an ID code is held over a reading device, or by authenticating the ID code through communication such as short-range wireless communication from the customer's mobile terminal.
  • Since the customer's reserved seat is displayed on the customer's mobile terminal or on the store's display device, the customer can identify the seat they reserved.
  • The automatic serving machine 21 can also be used to guide the customer to the reserved seat.
  • The customer notifies the system 1 of their arrival, either by wireless communication from a mobile terminal at the store entrance or by communication such as short-range wireless communication from the customer's mobile terminal.
  • The system 1 identifies the customer's ID, checks the past visit history and the like, and proposes a suitable vacant seat to the customer via the customer's mobile terminal or the store's display device.
  • When the customer approves the seat, the seat is assigned, and the customer is guided to it using the mobile terminal, the in-store display device, and the automatic serving machine 21. Food orders can be placed from the customer's mobile terminal, the store's dedicated terminal, or the like.
  • When there are no vacant seats, the system 1 presents the expected waiting time, the number of people waiting, and so on, on the customer's mobile terminal or the store's display device, and asks whether the customer will wait for a seat. If the customer chooses to wait, the customer is kept informed of the waiting status and provided with information such as recommended menus and store introductions. When the customer's turn comes, they are guided to the vacant seat as described above.
  • For payment, in the case of an order from a mobile terminal, payment can be made electronically from that terminal.
  • Electronic payment from the customer's mobile terminal is also possible when ordering through the store's dedicated terminal or a voice recognition device; if the store is equipped with an automatic payment device, the customer can use it instead.
  • Appropriate payment methods such as cash cards, credit cards, cash, electronic money, and prepaid cards can be used.
  • A gate can be provided at the store exit that opens when the customer presents a predetermined ID, for example by short-range wireless communication from a mobile terminal. Customers with unpaid charges can thus be identified, the opening and closing of the gate controlled, and voice or display prompts used to encourage payment. If a system requiring advance payment is adopted, customers can be prevented from leaving the store without paying.
  • To predict the number of visitors and the orders for each menu item, each customer's order data, past order-history data, information from related systems such as other stores, information from research organizations, information on the Internet, and so on can also be used; such large amounts of information can be analyzed by machine learning, for example using artificial intelligence.
  • The inventory of ingredients and the like corresponding to the predicted orders is managed by the inventory management of the system 1 according to the order contents.
  • Delivered stock is managed in a predetermined storage place and supplied as appropriate to the food material supply device 13, the noodle supply device 10, and the like.
  • Replenishment of ingredients and noodles to the food material supply device 13, the noodle supply device 10, and the like can be automated using, for example, the automatic serving machine 21.
  • (Example 5) This example illustrates the types of sensors.
  • The state of ingredients and cookers can be determined by sensors that measure sugar content, taste, weight, salt content, hardness (pressure sensors), temperature, humidity, time, and the like. Image recognition can also detect quality, for example by judging color changes. In addition, quality can be judged, for example, from the elapsed time since the soup was replenished.
  • These sensors are used, as necessary, for analysis in the AI demand forecasting unit 11, AI food loss forecasting unit 12, AI customer analysis unit 13, AI product analysis unit 14, AI staff work analysis unit 15, AI quality control unit 16, and so on.
  • (Example 6) By image recognition of the movements of a person (the cook), the state of cooking can be assessed and appropriate advice given to the cook. If necessary, analysis is performed by the AI staff work analysis unit 15, the AI quality control unit 16, and so on.
  • (Example 7) Foods displayed at a supermarket are image-recognized to grasp the sales situation. In addition to the remaining amount of displayed products, the condition of the food is also determined. If necessary, analysis is performed by the AI demand forecasting unit 11, AI food loss forecasting unit 12, AI customer analysis unit 13, AI product analysis unit 14, AI quality control unit 16, and so on.
  • (Example 8) In this example, lettuce or cabbage is recognized by deep learning through analysis of the cooking process in the AI quality control unit 16, and only the core portion is identified and separated. Deep learning was used for the image recognition of cabbage or lettuce.
  • ResNet18 was used as the core network, a CNN as the basic feature extractor, and PyTorch and PyTorchVision as the framework.
  • As training images, we used ball (head) lettuce purchased in Japan.
  • (1) Data set generation: labeling was performed on 73 images of ball lettuce. Specifically, the photographs of ball lettuce were manually labeled and formatted for the custom data set; as shown in FIG. 8, the position of the lettuce core is identified by two points. FIGS. 8A-8D show four example images of labeled ball lettuce. The two points identifying the lettuce core are a pair of diagonal vertices of a rectangle circumscribing the core.
  • FIG. 10 is an explanatory diagram of an output image of lettuce image recognition. The four points around the lettuce core correspond to the four vertices of the bbox rectangle, and the central point is the intersection of the bbox's two diagonals.
  • FIG. 11 is an explanatory diagram of a line for cutting lettuce.
  • The line for cutting out the lettuce core is determined as shown in FIG. 11. First, draw an ellipse through the four vertices of the bbox. Next, draw a rhombus from two pairs of tangents to the ellipse. Then cut the lettuce in half along one diagonal of the rhombus. Finally, remove the core from each half by making two cuts along the two sides of the rhombus belonging to that half.
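One way to realize this construction, as a geometric sketch: assuming an axis-aligned bbox whose inscribed ellipse has semi-axes (a, b), the tangents of slope ±b/a form a rhombus whose vertices lie on the axes at distances a√2 and b√2 from the center:

```python
import math

def cut_lines(bbox):
    """Return the halving cut and the four core-removal cuts for a bbox
    (x0, y0, x1, y1) around the lettuce core, per the FIG. 11 construction."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    a, b = (x1 - x0) / 2, (y1 - y0) / 2
    r = math.sqrt(2)
    # Rhombus vertices: tangents of slope +/- b/a touch the ellipse and
    # intersect the axes at (+/- a*sqrt(2), 0) and (0, +/- b*sqrt(2)).
    right, left = (cx + a * r, cy), (cx - a * r, cy)
    top, bottom = (cx, cy + b * r), (cx, cy - b * r)
    halving_cut = (left, right)                 # one rhombus diagonal
    core_cuts = [(left, top), (top, right),     # two rhombus sides per half
                 (left, bottom), (bottom, right)]
    return halving_cut, core_cuts
```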
  • (Another example of Example 8: recognizing the shape of the lettuce core)
  • In Example 8 above, the four vertices of the bbox are output; in this variation, an example of recognizing the shape of the lettuce core itself is described.
  • ResNet50 + FPN was used as the core network, with a convolutional mask head and a fully connected (FC) box head; a CNN was used as the basic feature extractor, and Detectron2 as the framework.
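For reference, a minimal sketch of how such a Mask R-CNN (ResNet50 + FPN) setup might be trained with Detectron2; the dataset name and paths are hypothetical:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
from detectron2.data.datasets import register_coco_instances

# Register a COCO-format dataset (paths are illustrative assumptions).
register_coco_instances("lettuce_core_train", {},
                        "annotations/train.json", "images/train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("lettuce_core_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # single class: lettuce core
cfg.SOLVER.MAX_ITER = 1000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```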
  • FIG. 12 is an explanatory diagram of lettuce labeling. As shown in FIG. 12, the photographs of ball lettuce are manually labeled and formatted. FIG. 12A is a training photograph before labeling, and FIG. 12B after labeling. In FIG. 12B, the core portion is marked with an irregular (free-form) outline. The output is a polygon combining multiple line segments, with no fixed shape.
  • FIG. 13 is an explanatory diagram of the training result of lettuce image recognition.
  • In FIG. 13, the solid line is the training loss and the broken line is the evaluation (validation) loss.
  • FIG. 14 is an explanatory diagram of the correct answer rate of lettuce image recognition, evaluated using IoU (Intersection over Union). The correct answer rate was 0.956; with the IoU threshold set to 1.0 (All), the correct answer rate was 0.796.
  • Here the IoU threshold refers to the required ratio of overlap between the area of the output mask and the area of the mask in the teaching photograph.
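A minimal sketch of the IoU computation underlying this evaluation, on boolean masks of equal shape:

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 0.0

# A prediction counts as correct when iou(...) meets the chosen threshold.
```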
  • FIG. 15 is an explanatory diagram of an output image of lettuce image recognition. Four output images are exemplified in FIGS. 15A to 15D. Each image is masked with its core surrounded by an irregular shape, and a rectangular box is displayed so as to circumscribe this mask.
  • FIG. 16 is an explanatory diagram of a method for determining a cutting line for cutting a core of cabbage or lettuce.
  • First, with reference to the predicted center of the box, the diagonals are extended by 5% at both ends, and this enlarged box is drawn before moving on to the cutting process.
  • In the cutting process, the cabbage or lettuce is first halved along one diagonal of the enlarged box; each half is then cut twice along the corresponding sides of the rectangle to remove the core.
  • The method of discriminating the portion to separate in this example is not limited to cabbage or lettuce; it can also be applied to separating other ingredients and dishes, such as meat and fish, in addition to vegetables.
  • (Example 9) This example describes a process of recognizing images of sushi toppings through analysis of the cooking process in the AI quality control unit 16 and arranging the sushi neatly in a container or dish.
  • The shape of sushi toppings is irregular.
  • The topping is recognized from its hue by image recognition, and the type and number of sushi pieces are recognized; for example, image recognition of shrimp and egg is performed by supervised learning.
  • Deep learning was used for image recognition of sushi.
  • ResNet50 was used as the core network, a CNN as the basic feature extractor, and Detectron2 as the framework.
  • (1) Data set generation: labeling was performed on 80 sushi images. Specifically, the sushi photographs were manually labeled and converted to COCO format for the custom data set. As preprocessing, the images underwent 2D transformations such as random scaling (ratio 0.5 to 1.5), random cropping of image regions at multiple points, and random rotation, and were normalized with the mean and standard deviation in consideration of lighting conditions; this increased the number of images from 80 to about 300.
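A minimal sketch of an augmentation pipeline along these lines with torchvision; the exact parameter values are illustrative assumptions:

```python
import torchvision.transforms as T

# Random scaling (0.5x-1.5x), rotation, cropping, and lighting variation,
# followed by tensor conversion and mean/std normalization.
augment = T.Compose([
    T.RandomAffine(degrees=180, scale=(0.5, 1.5)),  # random rotation and scaling
    T.RandomCrop(224, pad_if_needed=True),          # randomly cut out a region
    T.ColorJitter(brightness=0.3),                  # vary lighting conditions
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```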
  • FIG. 17 is an explanatory diagram of the recognition results for sushi toppings.
  • The correct answer rate for determining the type of sushi topping was 96.02% with an IoU threshold of 0.75 and a maximum of 100 bboxes.
  • In FIG. 17, each sushi topping is surrounded by a rectangular marker indicating the recognition result. Sushi toppings are often similar in shape and color, but the AI trained as described above recognizes them accurately.
  • Sushi recognized in this way is arranged in a container or dish with its color and shape taken into consideration.
  • An AI trained on examples of good arrangements and of arrangements to avoid can also be used to judge how suitable an arrangement is and to output appropriate arrangement information.
  • (Another example of Example 9) Among sushi toppings, lean tuna (akami), medium-fatty tuna (chutoro), and fatty tuna (otoro) must be distinguished, but determining the condition of tuna demands considerable knowledge and cooking skill from the cook. Here, a method of distinguishing lean, medium-fatty, and fatty tuna by deep learning is described.
  • ResNet18 was used as the core network, a CNN as the basic feature extractor, and PyTorch as the framework.
  • As in (1) above, 80 sushi images were labeled.
  • Specifically, the sushi photographs were manually labeled and converted to VOC format for the custom data set.
  • The images underwent the same 2D transformation preprocessing as in (1) and were further normalized with the mean and standard deviation in consideration of lighting conditions, increasing the number of images from 80 to about 300. However, if the lighting conditions are constant, normalization is unnecessary.
  • To recognize the situation in the dining hall, an inventory notification assistant, a tableware clearing (lower set) notification assistant, a visitor notification and demand forecasting assistant, a menu recommendation assistant, and the like are used.
  • These assistants use trained models in the AI demand forecasting unit 11, AI food loss forecasting unit 12, AI customer analysis unit 13, AI product analysis unit 14, AI staff work analysis unit 15, AI quality control unit 16, and so on of the AI platform 10 described in the first embodiment.
  • Conventionally, staff did not check frequently enough for inventory shortages at self-service salad bars and the like, which led to complaints. The inventory notification assistant therefore recognizes the inventory status of the salad bar and notifies the store manager or staff before stock runs short, optimizing inventory. This is expected to improve turnover and customer satisfaction.
  • The tableware clearing notification assistant can point out seats where tableware needs to be cleared, thereby maximizing table turnover and improving sales.
  • The visitor notification and demand forecasting assistant reports customer waiting status and demand predicted from past data, thereby reducing opportunity loss and food loss and improving customer satisfaction.
  • Store staff may not notice that a customer is running low on drinks. The menu recommendation assistant notifies the store manager or staff when a customer's glass is empty or nearly empty, reducing opportunity loss, maximizing sales, and improving customer satisfaction.
  • To recognize the cooking situation in the kitchen, a cooking reproduction assistant and the like are used.
  • Conventionally, the deliciousness of the menu offered differed from cook to cook, which sometimes led to complaints. With the cooking reproduction assistant, the AI learns the cooking methods of senior cooks and teaches them so that even an apprentice cook can reproduce them. For example, by adopting the steak cooking process of Example 1 of the first embodiment, the taste can be made uniform, improving customer satisfaction and maintaining and strengthening brand power.
  • By using functions such as those of the AI quality control unit 16, the AI assistant solutions and store optimization platform described above can be provided.
  • The cooking reproduction assistant can teach the cooks at each store an appropriate cooking method suited to the store's situation, using a model trained in the AI quality control unit 16 on the cooking methods of senior cooks. This makes it possible to provide uniform dishes at every store, comparable to those cooked by senior cooks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Strategic Management (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Provided is a highly versatile learning control system that can respond to changes in the environment on the basis of information that can be reliably detected from a work site such as a cooking environment. A learning control system according to an embodiment of the present invention is characterized by comprising one or more of: (a) a demand prediction module that predicts sales; (b) an image recognition module that recognizes an object; (c) an audio recognition module that recognizes the state of the object using audio; (d) a recommendation module that recommends a product to a customer; and (e) a fault detection module that detects a fault state of a device; the learning control system further comprises (f) a learning module that performs machine learning on the basis of information from the one or more modules, and the modules are controlled by a system trained by the learning module.

Description

Learning control system
The present invention relates to a learning control system.
In recent years, against the background of employee shortages at restaurants, there has been demand for automating restaurant service processes, and technologies such as those of Patent Documents 1 to 3 have been proposed.
Patent Document 1 describes a learning device, applied to cooking by a robot, provided with a relationship information learning unit that learns the relationships among biological information, emotional information, and environmental information by machine learning.
Patent Document 2 describes a cooking system that acquires detection information from a sensor monitoring the cooking state, retrieves from a storage means reference information indicating what the sensor should show when the cooking state is appropriate, generates instruction information for the cook from the detection information and the reference information, and outputs the instructions to an output means.
Patent Document 3 describes a cooking support system that can instruct the cook on an appropriate cooking time by learning from a teacher data table whose entries combine dish name, cooking process, ingredient name, quantity, cutting method, cut size, and cooking time.
Patent Document 1: Japanese Unexamined Patent Publication No. 2020-017104; Patent Document 2: Japanese Unexamined Patent Publication No. 2011-058782; Patent Document 3: Japanese Patent No. 6692960
In the learning device described in Patent Document 1, three kinds of information, biological, emotional, and environmental, are indispensable for the machine learning of the cooking robot, but the means for acquiring this information is complicated in configuration, and it is difficult to acquire the information accurately. Moreover, being a learning device for cooking robots, it cannot be applied to human cooking.
In the method described in Patent Document 2, the judgment that the cooking state is appropriate and the content of the instructions do not take the cooking environment into account but presuppose uniformly determined specific conditions, so it is difficult to respond to environmental changes in actual stores.
The method described in Patent Document 3 learns under limited conditions in a specific environment and therefore cannot be used universally across various stores.
In view of these problems, an object of the present invention is to provide a highly versatile learning control system that can respond to environmental changes based on information that can be reliably detected on site, such as in a cooking environment, to eliminate variation in cooking quality between cooks, and to enable menu sharing and uniform cooking quality.
The above object of the present invention can be achieved by the following configuration. That is, the learning control system according to the first aspect of the present invention comprises at least one of the following machine learning modules:
(a) a demand forecast module that forecasts sales,
(b) an image recognition module that recognizes objects,
(c) a voice recognition module that recognizes the state of an object from sound,
(d) a recommendation module that recommends products to customers, or
(e) an abnormality detection module that detects abnormal conditions of equipment,
and further has an AI platform including (f) a learning module that performs machine learning other than the above (a) to (e) and/or uses the information and results of at least one of the modules (a) to (e),
and is characterized by performing learning control using a model trained by the AI platform.
According to embodiments of the present invention, it is possible to provide a learning control system that can respond to environmental changes based on information that can be reliably detected on site, such as in a cooking environment. Dish quality ordinarily varies with the skill of the cook, but using such a learning control system eliminates this variation between cooks. In addition, by sharing information on cooking methods and recipes across all stores, menus can be shared and cooking quality made uniform.
FIG. 1 is a block diagram of the learning control system.
FIG. 2 is an explanatory diagram of image recognition of steak.
FIG. 3 is an explanatory diagram of the cooking progress of steak.
FIG. 4 is an explanatory diagram of image recognition results for each color component of steak.
FIG. 5 is an explanatory diagram of time-series image analysis of the grilled color of the steak surface.
FIG. 6 is an explanatory diagram of image recognition that analyzes the balance between lean meat and fat of steak and estimates cutting positions that divide the volume equally.
FIG. 7 is an explanatory diagram of deep learning using thermography camera images.
FIG. 8 is an explanatory diagram of lettuce labeling.
FIG. 9 is an explanatory diagram of training results of lettuce image recognition.
FIG. 10 is an explanatory diagram of an output image of lettuce image recognition.
FIG. 11 is an explanatory diagram of the line for cutting lettuce.
FIG. 12 is an explanatory diagram of lettuce labeling.
FIG. 13 is an explanatory diagram of training results of lettuce image recognition.
FIG. 14 is an explanatory diagram of the correct answer rate of lettuce image recognition.
FIG. 15 is an explanatory diagram of output images of lettuce image recognition.
FIG. 16 is an explanatory diagram of the method for determining the cutting line for removing the core of cabbage or lettuce.
FIG. 17 is an explanatory diagram of recognition results for sushi toppings.
Hereinafter, a learning control system according to an embodiment of the present invention will be described with reference to the drawings. However, the embodiments shown below merely exemplify a learning control system for embodying the technical idea of the present invention; the present invention is not limited to them and applies equally to other embodiments falling within the scope of the claims.
[First Embodiment]
The learning control system according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 17.
FIG. 1 is a block diagram of the learning control system of the present embodiment. The AI platform 10 is connected to the cooking robot 21, the business automation AI robot 22, and the AI restaurant 23 via the information network 20, and they communicate with one another.
The cooking robot 21 automates a series of cooking processes in a kitchen and is applicable to stores and food retailers. The business automation AI robot 22 automates simple tasks associated with cooking and is expected to be used, for example, in factories outside stores. An AI restaurant is a restaurant managed automatically by AI trained through machine learning. The level of automation can be chosen freely; if automation is pursued fully, a completely automated restaurant can be achieved. In practice, it is often implemented as a labor-saving restaurant that is fully automated during business hours, with some adjustments and work such as maintenance performed manually.
The AI platform 10 trains the AI of the cooking robot 21, the business automation AI robot 22, and the AI restaurant 23, collects big data for AI training, and can provide information for the management and marketing of stores, factories, and restaurants using various analysis data including AI outputs. For example, the AI platform 10 performs deep learning based on big data and provides trained models to the cooking robot 21, the business automation AI robot 22, the AI restaurant 23, and so on; stores such as the AI restaurant 23 can then perform image recognition, automatic cooking, cooking assistance, and the like using the trained models.
For example, customer image data collected at each store or restaurant, together with other data, is analyzed on the AI platform for use in assessing in-store congestion and in store operation, marketing, and so on.
Data analyzed by the AI platform includes, for example: store information such as store name, number of seats, number of visitors, and the time distribution of visitors; product information such as product name, price, cost, sales quantity per product, serving time per product, expiration date of each ingredient, and purchasing information; and image and audio information on products, foods, and their cooking. In AI customer analysis, store image data is used to recognize customer attributes such as age and gender, visit-time distribution, and stay time, which are automatically analyzed and visualized. However, no analysis that would identify individuals is performed. This vast body of data is also used on the AI platform to train the various AIs.
The AI platform 10 includes an AI supply/demand forecasting unit 11, an AI automatic food loss forecasting unit 12, an AI customer analysis unit 13, an AI product analysis unit 14, an AI staff work analysis unit 15, an AI quality control unit 16, and the like. The AI platform 10 is composed, for example, of the modules (a) to (f) described later, and these modules are used for the analyses in the analysis units 11 to 16.
In the AI supply/demand forecasting unit 11, the AI predicts the number of visitors, sales volumes, and so on by machine learning from information such as past results, weather, events, promotions, and information from other stores, and can visualize the results.
Based on the demand forecast, the AI automatic food loss forecasting unit 12 predicts and calculates the required raw materials and the number of staff, reducing the store manager's shift planning and purchasing work, avoiding over-staffing, and reducing food loss.
In the AI customer analysis unit 13, a camera with an AI function automatically recognizes customer attributes and analyzes and visualizes the customers' gender, age group, visiting hours, stay time, and so on; this analysis information is used for marketing.
The AI product analysis unit 14 analyzes and visualizes product trends from customer attributes, regions, and other factors in product purchases; this analysis information is used for product development.
The AI staff work analysis unit 15 links the work of store staff to ordered menu items, analyzes and visualizes working time, and uses the analysis information to improve store turnover.
The AI quality control unit 16 analyzes the cooking process to pursue deliciousness, to make taste uniform regardless of cook or store, and to optimize cooking procedures. Furthermore, it automatically manages HACCP (Hazard Analysis and Critical Control Point: process control for ensuring the safety of food and the like) related information for the store kitchen, reducing the work of ensuring safety during cooking. Although not limited to this, deep learning, for example, is used for the analysis of the cooking process in the AI quality control unit 16, with training based on image information, audio information, and the like concerning products, foods, and their cooking.
Further, the learning control system of the present embodiment includes the following steps, each of which can be machine-learned by the AI platform.
(1) Store reservation (automatic restaurant management, remote reservation, seat guidance)
(2) Ordering and payment (customized orders and remote payment; recommended menu proposals, nutrition management, food allergies, seasoning preferences, ingredients and toppings)
(3) Ingredient preparation (central kitchen, prepping, cutting)
(4) Ingredient supply (storage appropriate to each ingredient)
(5) Cooking (recipes, heating, mixing)
(6) Plating (automation of plating)
(7) Serving and delivery (automatic conveyance)
(8) Clearing and washing (handling tableware by image recognition)
(9) Preparation (purchasing based on demand forecasts, material replenishment, management and analysis)
As a specific example of the AI platform, a configuration including the following items (i) to (vi) can be constructed.
(i) Demand forecasting (cook only as many freshly made products as demand warrants)
(ii) Adjustment of temperature and heat level
(iii) Degree of grilling and grilling time (for example, recognizing from images when to flip a steak; thermography may also be used)
(iv) Adjustment of heat and time for boiling and frying (for example, judging the boiled or fried state by image and sound recognition)
(v) Ingredient recognition (for example, recognizing the type and state of an ingredient from images and sounds and determining an appropriate cooking method)
(vi) Recognition of cooking condition (for example, managing heat level, heating time, and so on by image and sound recognition)
An example configuration of the AI platform includes learning modules such as the following (a) to (f); in each module, training is performed, for example, by deep learning using big data.
(a) Demand forecast module: forecasts sales. Although not limited to this, unsupervised learning, for example, is adopted.
(b) Image recognition module: recognizes objects. Although not limited to this, supervised learning, for example, is performed, with segmentation and pixel-level recognition. Segmentation is effective when shapes are not fixed, i.e., not limited to squares and circles.
(c) Voice recognition module: recognizes, for example, cooking conditions. As a deep learning network, an LSTM (Long Short-Term Memory) or the like is used, and features such as periodicity and peaks are analyzed.
(d) Recommendation module: recommends favorites based on past data. To find a person's characteristics and express tendencies, unsupervised learning, for example, is adopted, although not limited to this. It has a recommendation function.
(e) Abnormality detection module: notifies of abnormal situations in equipment and the like; for example, it issues an alert when tempura oil deviates by ±10 degrees. On a factory line, an alert is issued even for one defect in 10,000 units; strict management is required for safety. Although not limited to this, unsupervised learning, for example, is performed, alerting only when a threshold is exceeded.
(f) Learning module (three types of learning): performs machine learning other than (a) to (e) above, or machine learning that uses or combines the information and results of (a) to (e). There are three types of machine learning: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning includes, for example, regression; unsupervised learning includes, for example, LSTMs; and reinforcement learning includes segmentation (for example, recognition of sushi toppings). Reinforcement learning is also suitable for robot learning; it learns from limited data, and representative methods include TD learning and Q-learning. Deep learning can be adopted in any of the modules.
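Although the publication discloses no source code, the Q-learning named above as a representative reinforcement learning method can be illustrated with a minimal tabular sketch; the states, actions, and reward scheme here (discretized cooking observations and a keep/flip/remove decision) are hypothetical illustrations and are not part of the disclosure.

```python
import random
from collections import defaultdict

# Hypothetical illustration: states could be discretized cooking observations
# (e.g. surface brightness bands); actions could be cooking decisions.
ACTIONS = ["keep_heating", "flip", "remove"]

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def choose_action(state):
    # Epsilon-greedy policy over the current Q estimates.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def q_update(state, action, reward, next_state):
    # One-step Q-learning (a TD method):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q_table[next_state].values())
    td_target = reward + gamma * best_next
    q_table[state][action] += alpha * (td_target - q_table[state][action])
```

In this formulation the TD error drives each update, which is why such methods can learn from the limited data gathered during operation.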
A specific configuration example of the AI platform 10 is described below. For example, in the case of steak, the AI quality control unit 16 recognizes the state, size, volume, and thickness of the fat and lean meat, and optimizes the heat level, grilling time, and flip timing by supervised learning.
For example, in the case of fried food, the frying condition can be optimized by supervised learning from the sound of the oil and the like. Furthermore, not only cooking times (frying time, grilling time, boiling time, etc.) but also maintenance timing can be reported. Demand forecasting also makes it possible to cook with serving timing matched to demand. Learning and recognition are based on image sensors, sound sensors, documents, and the like; temperature sensors, weight sensors, and the like can also be provided. For example, the remaining quantity and quality of a soup bar can be estimated from images, temperature, weight, and so on.
As an application example of the AI platform, in the case of a franchise chain, the headquarters-store relationship can be exploited: a model trained on data aggregated at the headquarters can be used at each store.
In the present embodiment, the products provided to customers are not particularly limited and include various food and drink menus such as eat-in menus, take-out menus, dishes, soft drinks, alcoholic drinks, hot drinks, and cold drinks. The stores are likewise not particularly limited and include various forms such as restaurants, mobile stores, delivery stores, temporary stores, eat-in corners, food courts, accommodation facilities, schools, hospitals, cafeterias, supermarkets, department stores, mass retailers, shops, and convenience stores.
For deep learning, a CNN, for example, is used. Although not limited to these, learning efficiency and accuracy can be improved by preprocessing such as labeling, augmenting the training data by 2D transformations, and processing RGB signals, HSV signals, infrared image signals, and the like. Hereinafter, Examples 1 to 9 are described as specific examples of machine learning.
(Example 1)
This example describes a process in which the AI quality control unit 16 optimizes the cooking of steak. The ratio of lean meat to fat in the steak meat, its calories, and so on are judged from images, and image recognition of the camera input adjusts for the state of the meat and the degree of grilling based on differences in color. Such image recognition eliminates differences in taste between stores and variations in grilling among cooks, enabling standardization. Ingredients are cooked optimally, based on past cooking data, at any store. The state of the raw meat is also judged so that it is cooked optimally.
(1) Dataset generation
The steak cooking process is filmed with two cameras; for example, a GoPro can be used. The acquired footage is labeled with the type of action (flip, put_in, none) and its start and end times, and the labeled video is converted into multiple still images. Although not limited to this, the image size can be 256 x 455 pixels. Next, the mean and standard deviation of the converted image pixels are computed. The images are then divided into a training list and a validation list; although not limited to these ratios, for example 2/3 of the images go to the training list and 1/3 to the validation list.
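Purely for illustration, the frame-extraction, statistics, and split steps described above might be sketched as follows in Python with OpenCV; the directory layout and function names are assumptions, not the disclosed code.

```python
import glob, random
import cv2
import numpy as np

def video_to_frames(video_path, out_size=(455, 256)):
    """Decode a labeled cooking video into still images (455 x 256 pixels)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, out_size))
    cap.release()
    return frames

frames = []
for path in glob.glob("labeled_clips/*.mp4"):  # hypothetical directory layout
    frames.extend(video_to_frames(path))

# Per-channel mean and standard deviation over all converted images.
stack = np.stack(frames).astype(np.float32) / 255.0
mean, std = stack.mean(axis=(0, 1, 2)), stack.std(axis=(0, 1, 2))

# 2/3 of the images for training, 1/3 for validation.
random.shuffle(frames)
split = len(frames) * 2 // 3
train_list, val_list = frames[:split], frames[split:]
```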
(2) Training
PyTorch and PyTorchVision were used as the framework, and a 3D ResNet-18 (18 layers) capable of action recognition was used as the network (hereinafter also referred to as the "core network"). The cross-entropy loss

$L = -\sum_{c} y_c \log \hat{y}_c$

was used, where $y_c$ is the one-hot ground-truth label for action class $c$ and $\hat{y}_c$ is the predicted probability for that class.
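A minimal PyTorch training-step sketch consistent with this description, assuming the torchvision implementation of 3D ResNet-18 (r3d_18) and three action classes (flip, put_in, none), could be:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# 3 action classes: flip, put_in, none.
model = r3d_18(num_classes=3)
criterion = nn.CrossEntropyLoss()  # the cross-entropy loss given above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_step(clips, labels):
    # clips: (batch, 3, 16, H, W) float tensor of 16-frame clips
    # labels: (batch,) int64 class indices
    optimizer.zero_grad()
    logits = model(clips)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```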
As preprocessing, the number of images was increased by augmenting them with:
・random scaling (factor 0.8 to 1.3),
・randomly cropping part of the image at one of five locations (center, top-left, top-right, bottom-left, bottom-right),
・random rotation,
・random horizontal flipping, and
・normalization with the mean and standard deviation.
In addition, to generate time-series images, sequences of 16 consecutive frames were randomly cut from identically labeled images.
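As a rough illustration only (not the disclosed code), the augmentations listed above could be approximated with torchvision transforms; the crop size and the dataset statistics below are placeholders.

```python
import random
import torchvision.transforms as T

mean, std = [0.45, 0.42, 0.39], [0.22, 0.22, 0.22]  # placeholder statistics

def five_crop_random(img, size=224):
    # Pick one of the five fixed crop locations at random
    # (top-left, top-right, bottom-left, bottom-right, center).
    crops = T.FiveCrop(size)(img)
    return random.choice(crops)

train_tf = T.Compose([
    T.RandomAffine(degrees=180, scale=(0.8, 1.3)),  # random rotation and scaling
    T.Lambda(five_crop_random),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean, std),
])
```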
(3) Validation
For the validation data, the number of images was increased using center cropping, normalization with the mean and standard deviation, and so on. As the method for generating time-series images, 16 consecutive frames were cut out in temporal order; when fewer than 16 frames remained, the last frame was copied to pad the sequence to 16.
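A minimal sketch of this clip rule, cutting 16-frame clips in temporal order and padding the last clip by repeating its final frame, could be:

```python
def make_clips(frames, clip_len=16):
    """Cut consecutive 16-frame clips in temporal order; pad the final clip by
    repeating its last frame when fewer than 16 frames remain."""
    clips = []
    for start in range(0, len(frames), clip_len):
        clip = frames[start:start + clip_len]
        while len(clip) < clip_len:
            clip.append(clip[-1])  # copy the last image up to 16 frames
        clips.append(clip)
    return clips
```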
(4) Results
The learning results are described with reference to FIGS. 2 to 4. FIG. 2 shows the judgment results for when to put the meat in and when to flip it: FIG. 2A correctly judges the put-in timing (put_in, 0.927), and FIG. 2B correctly judges the flip timing (flip, 0.907). The judgment accuracy was 97.78%.
FIG. 3 shows the result of analyzing the number of actions; the put-in and flip timings are properly discriminated. FIG. 4 shows the result of analyzing the pixels of the meat images, in which grayscale, red, green, and blue image data were analyzed. FIG. 4A shows a color image and FIG. 4B a grayscale image. FIG. 4C shows the analysis of the grayscale image, FIG. 4D that of the red component, FIG. 4E that of the green component, and FIG. 4F that of the blue component. An AI trained on these images and analysis results can recognize the state of the meat from images.
From the action counts in FIG. 3 and the meat images in FIG. 4, the put-in and flip timings are learned, and the optimal put-in and flip timings are output. Machine learning can thus discern the degree of doneness and appropriately determine cooking times, such as put-in and flip timings, for each degree of doneness (rare, medium, well-done, etc.). Learning accuracy can also be improved using data captured on the customer side.
The image recognition of FIG. 4 can also discriminate the state of the meat, and the grilling can be adjusted based on this result. For example, from the balance of lean meat and fat, the distribution of the fat, the color of the lean meat and of the fat, and the volume, thickness, and shape of the meat, it is possible to judge the state of the meat, calculate its calories, adjust the grilling, and identify where to start grilling and which parts need thorough grilling, so that a steak grilled optimally for the state of the meat can be provided.
When a steak is cooked by a human cook, the AI can provide the cook with information such as the heat level, grilling time, and when to flip the meat. When cooked by a cooking robot, the robot can grill the steak optimally based on information from the AI. This makes it possible to provide uniform, high-quality steak anytime and anywhere, independent of the store or the cook.
(5) Judging doneness from brightness
FIG. 5 is an explanatory diagram of time-series image analysis of the grill color on the surface of a steak. The steak meat (beef) region is extracted from the cooking images by object recognition using the same deep learning as above, and the grill color of the beef surface is analyzed as a time series. FIG. 5A is an image at the start of grilling; the beef is grilled progressively in the order of FIGS. 5A, 5B, 5C, and 5D, with FIG. 5D showing the finished state. FIG. 5E is a graph of the change in brightness as the beef is grilled until done. The vertical axis of FIG. 5E is Brightness in the HSV color model, and the horizontal axis is cooking time. The straight line drawn parallel to the horizontal axis (for example, the line at Brightness = 196) is a judgment line used as one criterion when analyzing doneness. By quantitatively observing the time course of the grill color based on the deep learning above, the difference between the cooking processes of an expert and an apprentice can be analyzed, and the apprentice cook can be taught a cooking process equivalent to the expert's. In FIG. 5E, the curve in which Brightness gradually decreases while repeating small fluctuations shows a particular cook grilling beef for steak. When the meat is flipped, a sharp drop in Brightness is observed, reflecting the change of the surface being imaged. By analyzing the characteristics of an expert's cooking process with respect to these Brightness changes through deep learning, it becomes possible to teach a cook the optimal grilling time and flip timing.
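As an illustrative sketch of this brightness analysis, assuming OpenCV, a meat-region mask from the object-recognition step, and the judgment line of FIG. 5E as the threshold:

```python
import cv2
import numpy as np

def mean_brightness(frame_bgr, mask=None):
    """Mean V (brightness) of the HSV representation, optionally restricted to
    the meat region given by a binary mask from object recognition."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    return float(v[mask > 0].mean()) if mask is not None else float(v.mean())

def doneness_reached(brightness_series, threshold=196):
    # Judgment line analogous to the Brightness = 196 line in FIG. 5E.
    return brightness_series[-1] <= threshold
```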
(6) Analysis of the balance between lean meat and fat
FIG. 6 is an explanatory diagram of image recognition that analyzes the balance between the lean meat and fat of a steak and estimates cutting positions that divide it into equal volumes. In FIG. 6, gray indicates the lean beef portion, black the fat portion, and the vertical lines are cut lines indicating positions that divide the meat into equal volumes. According to the image recognition of this example, based on the same deep learning as above, it is possible to recognize not only the lean-fat balance and the equal-volume cutting positions but also the weight (Weight), volume (Meat Volume), area (Meat Area), fat ratio (Fat Rate), and calories (Calorie). Since the displayed equal-volume cut lines vary with the camera's shooting direction, in practice it is desirable to install the camera so that its optical axis is within about 20 degrees of the perpendicular from the observation target.
(7) Analysis of infrared thermography camera images
FIG. 7 illustrates deep learning, performed as above, using thermography camera images. When a thermography camera is used, not only surface color changes but also temperature changes can be captured and analyzed, making it easier to teach a cooking technique equivalent to an expert's. In FIG. 7A, the average temperature is 53.85°C and the temperature variation (standard deviation, STD) is 2.56°C. In FIG. 7B, the average temperature is 52.49°C and the STD is 20.65°C. A large STD is an indicator of uneven grilling. For example, if the center of the frying pan is hotter than its periphery, the deep learning system outputs teaching content such as adjusting where the object being cooked (here, beef for steak) is placed on the pan; following this teaching content, cooking can be performed so that the STD stays within a predetermined tolerance.
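A minimal sketch of the unevenness indicator described here, assuming a 2D array of per-pixel temperatures from the thermography camera and a hypothetical STD tolerance, could be:

```python
import numpy as np

def grill_uniformity(temp_image, std_tolerance=5.0):
    """Mean temperature and standard deviation over a thermography image of
    the meat region; a large STD indicates uneven grilling. The tolerance is
    a hypothetical allowable value, not one given in the publication."""
    mean_t = float(np.mean(temp_image))
    std_t = float(np.std(temp_image))
    return mean_t, std_t, std_t <= std_tolerance
```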
In this example, beef has been described as an example of steak meat, but the application of this example is not limited to steak meat; it is equally applicable to other dishes and ingredients such as meat, fish, and vegetables.
(Example 2)
This example describes a system that advises on noodle boiling time, water temperature, and heat level through analysis of the cooking process in the AI quality control unit 16. Boiling noodles is used as the illustration, but the embodiment is not limited to this; it is applicable to cooking equipment using heating media other than hot water, for example a fryer using oil. As needed, analyses by the AI demand forecasting unit 11, the AI customer analysis unit 13, and so on are also performed.
(1) Store reservation: automatic restaurant management, remote reservation, and seat guidance are performed.
(2) Ordering and payment: customized orders and remote payment are possible, as are recommended menu proposals, nutrition management, food allergies, seasoning preferences, and ingredients and toppings. The demand forecast module predicts sales according to date, weather, events, and so on, and the recommendation module recommends favorite menus matched to each customer based on past customer data.
(3) Ingredient preparation: prepping such as cutting is performed in the central kitchen.
(4) Ingredient supply: ingredients are stored in a manner appropriate to each.
(5) Cooking (recipes, heating, mixing): the image recognition module recognizes objects, the voice recognition module recognizes cooking conditions and the like, the abnormality detection module reports abnormal situations in equipment, and the reinforcement learning module learns from limited data.
(6) Plating: plating is automated.
(7) Serving and delivery: fully automatic conveyance is performed. Store congestion can be grasped in advance with the demand forecast module.
(8) Clearing and washing: tableware is handled using image recognition.
(9) Preparation: purchasing based on demand forecasts, material replenishment, and management and analysis are performed. The demand forecast module predicts sales based on date, weather, events, and so on. The image recognition module performs object recognition of ingredient type, shape, size, and so on.
(Example 3)
This example describes a salad bar, but the embodiment is not limited to salad bars and is also applicable to serving styles such as soup bars, all-you-can-eat, and buffets. Using the AI platform, the remaining quantity and state of dishes can be appropriately judged, and replenishment, serving, and clearing can be performed. As needed, analyses by the AI demand forecasting unit 11, the AI food loss forecasting unit 12, the AI customer analysis unit 13, the AI product analysis unit 14, the AI quality control unit 16, and so on are also performed.
(1) Store reservation: automatic restaurant management, remote reservation, and seat guidance are performed. The demand forecast module predicts sales. The image recognition module recognizes the in-store situation, the number and state of customers, and so on, grasping seat availability. The voice recognition module can recognize not only the state of the seating area but also cooking conditions.
(2) Ordering and payment: customized orders and remote payment are possible, as are recommended menu proposals, nutrition management, food allergies, seasoning preferences, and ingredients and toppings. The demand forecast module also predicts sales.
(3) Ingredient preparation: prepping such as cutting is performed in the central kitchen. The image recognition module recognizes objects such as ingredient type and quantity, and the voice recognition module also recognizes store and cooking conditions, enabling ingredient preparation appropriate to the situation.
(4) Ingredient supply: ingredients are stored in a manner appropriate to each. Here, the image recognition module recognizes objects such as ingredient type and quantity, and the voice recognition module is used together to recognize store and cooking conditions.
(5) Cooking (recipes, heating, mixing): the image recognition module recognizes objects, the voice recognition module recognizes cooking conditions and the like, the abnormality detection module reports abnormal situations in equipment, and the reinforcement learning module learns from limited data.
(6) Serving and delivery: during automatic conveyance, the image recognition module recognizes objects such as the store situation, and the voice recognition module recognizes not only the store situation but also cooking conditions.
(7) Clearing and washing: when tableware and the like are handled automatically, the image recognition module recognizes objects such as tableware, and the voice recognition module recognizes store and cooking conditions.
(8) Preparation: purchasing based on demand forecasts, material replenishment, and management and analysis are performed. The demand forecast module predicts sales based on date, weather, events, and so on, and the image recognition module recognizes objects such as the state of the seating area.
(Example 4)
This example describes an automated restaurant. In the automated restaurant, analyses are performed as needed by the AI demand forecasting unit 11, the AI food loss forecasting unit 12, the AI customer analysis unit 13, the AI product analysis unit 14, the AI staff work analysis unit 15, the AI quality control unit 16, and so on.
(1) Store reservation: automatic restaurant management, remote reservation, and seat guidance are performed. The demand forecast module predicts sales; the image recognition module grasps the number and state of customers in the store by object recognition and automatically determines the number of vacant seats; and the voice recognition module recognizes not only the seating situation but also cooking conditions.
(2) Ordering and payment: customized orders and remote payment, recommended menu proposals, nutrition management, food allergies, seasoning preferences, and ingredients and toppings are handled. Sales can be grasped in advance with the demand forecast module. Furthermore, customers are greeted at the entrance and guided to their seats automatically.
(3) Ingredient preparation: prepping such as cutting is performed in the central kitchen. The image recognition module recognizes objects such as ingredients, and the voice recognition module recognizes cooking conditions.
(4) Ingredient supply: ingredients are stored in a manner appropriate to each. Here, the image recognition module recognizes objects such as ingredient type and quantity, and the voice recognition module is used together to recognize store and cooking conditions.
(5) Cooking (recipes, heating, mixing): the image recognition module performs object recognition, the voice recognition module recognizes cooking conditions and the like, the abnormality detection module reports abnormal situations in equipment, and the reinforcement learning module learns from limited data.
(6) Plating: plating is fully automated. The image recognition module recognizes objects such as plates and dishes, the voice recognition module recognizes cooking conditions, the abnormality detection module reports abnormal situations in equipment, and the reinforcement learning module can learn from limited data.
(7) Serving and delivery: during automatic conveyance, the image recognition module recognizes objects such as the store situation, and the voice recognition module recognizes not only the store situation but also cooking conditions. Furthermore, the system can detect that a customer's water glass is empty and refill it.
(8) Clearing and washing: when tableware and the like are handled automatically, the image recognition module recognizes objects such as tableware, and the voice recognition module recognizes store and cooking conditions. Cleared tableware is washed automatically by an automatic washing robot. Furthermore, the system judges whether a customer has left the seat temporarily, for example to go to the restroom, or has departed. Departing customers are guided to the exit and seen off.
(9) Preparation: purchasing based on demand forecasts, material replenishment, and management and analysis are performed. The demand forecast module predicts sales based on date, weather, events, and so on, and the image recognition module recognizes objects such as the state of the seating area.
The automated restaurant according to the present embodiment is described in more detail. The automated restaurant of this embodiment is equipped with an automatic food serving system; it accepts food orders, for example from a customer's mobile terminal, automatically cooks the dish according to the recipe and cooking method for that order, and serves it to the customer. The automated restaurant of this embodiment can also: automatically grasp seat availability in the store; let customers check availability remotely; let customers reserve seats remotely; let customers order from the menu remotely; automate serving of food to customers; settle payment from the customer's terminal; automate delivery; automate collection of used serving containers; automate washing of used serving containers; automate restocking of washed serving containers; automate washing of used cooking containers; automate guiding customers to their seats in the store; automate prediction of customer orders; automate ordering for the purchase of ingredients; and automatically replenish the supply devices with delivered ingredients.
As an example of means for automatically grasping seat availability in the store, the use of an imaging device is described. An imaging device such as a camera is installed in the store and captures the state of the seating area. By recognizing the captured images, seat availability can be grasped automatically and in real time. Vacant seats in the store can thus be automatically determined and the vacancy information reflected in the reservation system. Using this reservation system, customers can check the store's vacant seats in real time at any time, reserve a desired seat, and order food together with the seat reservation. Since serving times can be adjusted to the reservation time, ordered dishes can be served to customers at an appropriate timing after they enter the store. As the imaging means, appropriate devices such as monochrome cameras, color cameras, infrared cameras, and video cameras can be used. Vacancy can be judged not only by image recognition of video but also by image recognition of still images taken at predetermined intervals. Furthermore, by analyzing images of the dishes served to a customer, the pace at which the customer is eating can be grasped. The automatic food serving system 1 can therefore serve dishes matched to the customer's eating pace and can suggest orders of additional items (including desserts and drinks) via the mobile terminal.
Guidance of customers to their seats upon entry can also be automated. When a customer has reserved a seat, the system 1 recognizes the customer when the customer presents the reservation number, customer code, or the like issued at the time of reservation, for example by holding a barcode or two-dimensional barcode registered on the customer's mobile terminal over a reader at the store entrance, or through authentication of an ID code by communication such as short-range wireless communication from the customer's mobile terminal. The reserved seat is then displayed on the customer's mobile terminal or on a display device in the store, so the customer can identify it. The automatic serving machine 21 can also be used to guide the customer to the reserved seat.
When a customer has not made a reservation, the customer notifies the system 1 of the visit by wireless communication from a mobile terminal at the store entrance, or the mobile terminal communicates with the system 1 by short-range wireless communication or the like; the system 1 then identifies the customer's ID, checks past visit history and the like, and proposes a suitable vacant seat via the customer's mobile terminal or a store display device. When the customer approves the seat, it is assigned, and the customer is guided to it by means such as guidance on the mobile terminal, display on an in-store display device, or the automatic serving machine 21. Food can be ordered from the customer's mobile terminal or a dedicated store terminal. When there are no vacant seats, the system 1 presents the expected waiting time, the number of people waiting, and so on, by displaying them on the customer's mobile terminal or a store display device, and asks whether the customer will wait. If the customer chooses to wait, the waiting status is reported to the customer from time to time, along with information such as recommended menus and store introductions; when a seat becomes available, the customer is guided to it in waiting order, in the same manner as above.
As for payment, orders placed from a mobile terminal can be settled electronically from that terminal. Electronic payment from the customer's mobile terminal is also possible for orders placed at a dedicated store terminal or via a voice recognition device; if the store is equipped with an automatic payment device, the customer can pay by appropriate means such as cash card, credit card, cash, electronic money, or prepaid card. If a gate is provided at the store exit that opens when the customer presents a predetermined ID, for example via short-range wireless communication from a mobile terminal, customers with unpaid bills can be identified, the opening and closing of the gate controlled, and payment prompted by voice or display. If a prepayment system is adopted, customers leaving without paying can be prevented.
Next, means for automating ordering and inventory management for ingredient purchasing, and means for automatically replenishing the food supply device 13 or the noodle supply device 10 with delivered ingredients, are described. The system 1 of this example holds, in addition to all customers' order data, order data from other stores, past order history data, weather data, temperature data, humidity data, calendar information, event information, crowd forecast information, and in-store congestion obtained by image recognition of the imaging means. From this information it predicts the number of visitors to the store and the orders for each menu item, manages the inventory of ingredients and noodles in advance, and can automate ordering for ingredient purchases. For predicting visitor numbers and menu orders, each customer's order data, past order history data, information from related systems such as other stores, information from research organizations, and information from the Internet can also be used, and this large volume of information can be analyzed, for example, by machine learning using artificial intelligence.
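As a hedged illustration only (the publication does not specify the features, model, or data), such demand prediction could be sketched as a regression over calendar, weather, and congestion features, for example with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical feature rows: [day_of_week, month, temperature, humidity,
# is_holiday, event_flag, crowd_forecast]; target: orders of one menu item.
X = np.array([[0, 7, 31.5, 62, 0, 1, 3],
              [5, 7, 29.0, 70, 1, 0, 2],
              [6, 7, 33.2, 55, 1, 1, 4]])
y = np.array([120, 95, 150])  # toy training targets, for illustration only

model = GradientBoostingRegressor().fit(X, y)
tomorrow = np.array([[1, 7, 30.0, 60, 0, 0, 3]])
print("predicted orders:", model.predict(tomorrow)[0])
```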
Inventory of ingredients corresponding to predicted orders is managed according to the inventory management of the system 1 in accordance with the contents of the orders. Delivered stock is kept in a predetermined stock location and supplied as appropriate to the food supply device 13, the noodle supply device 10, and so on. Replenishing the food supply device 13 and the noodle supply device 10 with ingredients or noodles can be automated, for example using the automatic serving machine 21.
(Example 5)
This example illustrates types of sensors. The state of ingredients and cooking equipment can be judged by sensors that analyze sugar content, taste, weight, salt content, hardness (pressure sensors), temperature, humidity, time, and so on. Image recognition can also detect deliciousness, for example by judging color changes. Quality can also be judged, for example, from the elapsed time since soup was replenished. These sensors are used, as needed, in analyses by the AI demand forecasting unit 11, the AI food loss forecasting unit 12, the AI customer analysis unit 13, the AI product analysis unit 14, the AI staff work analysis unit 15, the AI quality control unit 16, and so on.
(Example 6)
By also recognizing the movements of a person (the cook) in images, appropriate advice on the state of cooking can be given to the cook. As needed, analyses by the AI staff work analysis unit 15, the AI quality control unit 16, and so on are performed.
(Example 7)
Foods displayed in a supermarket are recognized in images to grasp the sales situation. In addition to the remaining quantity of displayed products, the state of the food is also judged. As needed, analyses by the AI demand forecasting unit 11, the AI food loss forecasting unit 12, the AI customer analysis unit 13, the AI product analysis unit 14, the AI quality control unit 16, and so on are performed.
(Example 8)
This example describes recognizing a lettuce or cabbage by deep learning through analysis of the cooking process in the AI quality control unit 16, identifying only the core portion, and cutting it out. Deep learning was used for image recognition of the cabbage or lettuce: ResNet-18 was used as the core network, a CNN as the basic feature extractor, and PyTorch and PyTorchVision as the framework. Head lettuce purchased in Japan was used for the training images.
(1)データセットの生成
 玉レタスの画像(73枚)にラベリングを行った。具体的には、カスタムデータセットに対し、玉レタスの写真を手作業でラベリングを行ってフォーマットを整えておく。具体的には、図8に示したように、レタスの芯の位置を2つの点で識別している。図8A~図8Dには、玉レタスをレベリングした例として、4件の画像が示されている。このレタスの芯を識別する2つの点は、レタスの芯に外接する長方形の1つの対角線上の一対の頂点となる。また、画像に対して、前処理として2D変換による水増しを行った。2D変換としては例えば、
・mult=1.0
・do_flip=True
・max_rotate=360
・pad_mode=‘reflection’
を含む。バッチサイズは16とした。
(1) Data set generation
Labeling was performed on 73 images of head lettuce. Specifically, for a custom data set, the photographs of head lettuce were manually labeled and formatted. As shown in FIG. 8, the position of the lettuce core is identified by two points. FIGS. 8A to 8D show four example images of labeled head lettuce. The two points identifying the lettuce core are a pair of vertices on one diagonal of a rectangle circumscribing the core. In addition, the images were augmented by 2D transformations as preprocessing. The 2D transformations include, for example, the following options (an illustrative sketch follows the list):
・mult=1.0
・do_flip=True
・max_rotate=360
・pad_mode='reflection'
The batch size was 16.
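A minimal sketch of such an augmentation pipeline, assuming the listed options map onto random flips, 360-degree random rotation, and reflection padding, might look as follows in torchvision; the dataset name is hypothetical, and this is an illustrative reconstruction, not the applicants' actual code.

```python
# Illustrative sketch only (assumption): torchvision counterparts of the
# listed 2D transforms; the original option names resemble fastai-style
# arguments (do_flip, max_rotate, pad_mode).
import torchvision.transforms as T
from torch.utils.data import DataLoader

train_tfms = T.Compose([
    T.RandomHorizontalFlip(p=0.5),               # do_flip=True
    T.Pad(padding=32, padding_mode='reflect'),   # pad_mode='reflection'
    T.RandomRotation(degrees=360),               # max_rotate=360
    T.CenterCrop(224),                           # crop back to a fixed size
    T.ToTensor(),
])

# `lettuce_dataset` is a hypothetical Dataset wrapping the 73 labeled images.
# loader = DataLoader(lettuce_dataset, batch_size=16, shuffle=True)
```

Note that for point regression the same geometric transforms must also be applied to the two labeled corner points; that bookkeeping is omitted here.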
(2) Learning
For hyperparameter setting, the data set was split (train/validation): the images were divided into training data (73 images) used to build the model and data (31 images) used for hyperparameter tuning (model evaluation). ResNet18 was used as the core network, and the output was the four vertices of a bbox based on ResNet18. A CNN was used as the basic feature extractor, and PyTorch and PyTorchVision were used as the framework. FIG. 10 is an explanatory diagram of an output image of lettuce image recognition. The four points around the lettuce core correspond to the four vertices of the bbox rectangle, and the central point is the intersection of the bbox's two diagonals.
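A minimal sketch of such a regressor, assuming the four bbox vertices are encoded as the four coordinates (x1, y1, x2, y2) of the two labeled diagonal corners, might look as follows; the class name and the L2 loss choice are illustrative assumptions, not the applicants' actual implementation.

```python
# Illustrative sketch only (assumption): ResNet18 with its classifier head
# replaced by a 4-dimensional regression head for the core bbox.
import torch.nn as nn
import torchvision.models as models

class CoreBoxNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 4)

    def forward(self, x):        # x: (N, 3, H, W) lettuce images
        return self.backbone(x)  # (N, 4) -> (x1, y1, x2, y2)

loss_fn = nn.MSELoss()  # regression toward the labeled corner coordinates
```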
(3) Verification
The discrimination rate for the lettuce core was evaluated visually, and the hyperparameters were manually evaluated and tuned. The 31 photographs mentioned above were used as the data for hyperparameter tuning.
(4) Results
FIG. 9 shows the training results. The solid line is the training loss, and the broken line is the validation loss. When both losses decrease and level off parallel to the X-axis, learning has converged. If either loss rises or fluctuates erratically, this indicates overfitting. The output accuracy was 95.6% at an IoU threshold of 0.75 with a maximum of 100 bboxes.
FIG. 11 is an explanatory diagram of the lines for cutting the lettuce. Using the four bbox vertices obtained from the trained CNN, the lines for cutting out the lettuce core are determined as shown in FIG. 11. First, an ellipse is drawn through the four bbox vertices. Next, a rhombus is drawn from two pairs of tangents to the ellipse. The lettuce is then cut in half along one of the diagonals of the rhombus. For each half, the core is removed by cutting twice along the two adjacent sides of the rhombus.
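As a minimal sketch of this geometry: for an axis-aligned bbox of width w and height h, the ellipse through the four corners has semi-axes w/√2 and h/√2, and a symmetric choice of tangent rhombus then has its vertices at distances w and h from the bbox center along the axes. The symmetric tangent choice and the helper name are assumptions for illustration.

```python
# Illustrative sketch only (assumption): cut geometry from an axis-aligned
# core bbox (x1, y1, x2, y2). One rhombus diagonal gives the halving cut;
# the rhombus sides give the two core-removal cuts per half.
from typing import List, Tuple

Point = Tuple[float, float]

def core_cut_rhombus(x1: float, y1: float, x2: float, y2: float) -> List[Point]:
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = abs(x2 - x1), abs(y2 - y1)
    # Vertices of the rhombus tangent to the ellipse circumscribing the bbox,
    # chosen symmetrically about the bbox center.
    return [(cx + w, cy), (cx, cy + h), (cx - w, cy), (cx, cy - h)]
```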
It was confirmed that a CNN trained on photographs of head lettuce can also image-recognize a cabbage and identify its core. It was found that, as long as the image shows the cross-section of the core facing the camera, the core is identified with high probability regardless of the type of cabbage or lettuce. The CNN trained on head lettuce can identify almost all commercially available cabbages, such as winter cabbage, spring cabbage, purple cabbage, and green ball types, as well as heading lettuce varieties and, depending on the shooting conditions, many non-heading types such as kaki-chisha (stem lettuce), tachi-chisha (romaine-type lettuce), and ha-chisha (leaf lettuce). For non-heading lettuce, however, identification is possible only if the image shows the cross-section of the core facing the camera.
(5) Another Example of Example 8 (Example of recognizing the shape of the lettuce core)
In the lettuce recognition above, the four bbox vertices were output; in this alternative example, the shape of the lettuce core itself is recognized. ResNet50+FPN was used as the core network, and for the heads, the mask head was set to conv and the box head to FC. A CNN was used as the basic feature extractor, and Detectron2 was used as the framework.
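A minimal sketch of such a configuration in Detectron2, whose standard Mask R-CNN R50-FPN config already uses a convolutional mask head and an FC box head, might look as follows; the dataset names are hypothetical, and this is not the applicants' actual code.

```python
# Illustrative sketch only (assumption): Mask R-CNN with a ResNet50+FPN
# backbone configured through Detectron2's model zoo.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1           # single class: lettuce core
cfg.DATASETS.TRAIN = ("lettuce_core_train",)  # hypothetical dataset names
cfg.DATASETS.TEST = ("lettuce_core_val",)
```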
As in Example 8, the data set was split (train/validation) for hyperparameter setting: the images were divided into training data (73 images) used to build the model and data (31 images) used for hyperparameter tuning (model evaluation). FIG. 12 is an explanatory diagram of lettuce labeling. As shown in FIG. 12, the photographs of head lettuce were manually labeled and formatted. FIG. 12A is a training photograph before labeling, and FIG. 12B is the same photograph after labeling. In FIG. 12B, the core portion is marked with a free-form outline. The output is a polygon composed of multiple line segments, that is, a free-form shape.
FIG. 13 is an explanatory diagram of the training results for lettuce image recognition. The solid line is the training loss, and the broken line is the validation loss. When both losses decrease and level off parallel to the X-axis, learning has converged. If either loss rises or fluctuates erratically, this indicates overfitting.
FIG. 14 is an explanatory diagram of the accuracy of lettuce image recognition. At an IoU (Intersection over Union) threshold of 0.5 or 0.75, the accuracy was 0.956; at an IoU threshold of 1.0 (All), the accuracy was 0.796. Here, the IoU is the proportion of the output mask's area that overlaps the area of the mask in the teaching photograph. FIG. 15 shows output images of lettuce image recognition; four example output images are shown in FIGS. 15A to 15D. In each image, the core portion is enclosed and masked with a free-form outline, and a rectangular box circumscribing this mask is displayed.
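For reference, a minimal sketch of a mask IoU computation under the standard intersection-over-union definition (the description above states it informally as an overlap ratio); the function name is illustrative.

```python
# Illustrative sketch only: IoU between a predicted mask and the teaching
# (ground-truth) mask, both boolean numpy arrays of the same shape.
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union else 0.0
```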
FIG. 16 is an explanatory diagram of a method for determining the cutting lines for cutting out the core of a cabbage or lettuce. First, with the center of the predicted box as the reference, the diagonals are extended by 5% at both ends. The enlarged box is then drawn, and the cutting process begins. In the cutting process, the cabbage or lettuce is first halved along one diagonal of the box rectangle. Each half is then cut twice, along the corresponding sides of the rectangle, to remove the core. In this alternative example as well, a CNN trained on photographs of head lettuce can identify the core of cabbages and lettuces other than head lettuce, as in Example 8 above; although for non-heading lettuce the image must show the cross-section of the core facing the camera, the cores of most commercially available cabbages or lettuces can be identified.
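A minimal sketch of the box enlargement step, assuming that "extending the diagonals by 5% at both ends" means scaling each half-diagonal by 1.05 about the box center; both the interpretation and the function name are assumptions.

```python
# Illustrative sketch only (assumption): enlarge the predicted core box by
# extending its diagonals 5% beyond each end about the center.
def expand_box(x1: float, y1: float, x2: float, y2: float, ratio: float = 0.05):
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half_w, half_h = (x2 - x1) / 2.0, (y2 - y1) / 2.0
    s = 1.0 + ratio
    return (cx - half_w * s, cy - half_h * s,
            cx + half_w * s, cy + half_h * s)
```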
In this embodiment, identification of the core of a cabbage or lettuce has been described as an example, but the cut-portion identification method of this embodiment is not limited to cabbage or lettuce and is equally applicable to cutting other ingredients and dishes such as vegetables, meat, and fish.
(Example 9)
In this embodiment, a process is described in which the cooking process is analyzed in the AI quality control unit 16, sushi toppings are recognized by image, and the sushi is arranged neatly in a dish or container. Sushi toppings are irregular in shape. The topping is recognized by image recognition based on its hue, and the type and number of sushi pieces are recognized. For example, image recognition of shrimp and egg is performed by supervised learning.
Deep learning was used for image recognition of the sushi. Specifically, ResNet50 was used as the core network, a CNN was used as the basic feature extractor, and Detectron2 was used as the framework.
(1) Data set generation
Labeling was performed on 80 sushi images. Specifically, for a custom data set, the sushi photographs were manually labeled and converted to COCO format. The images were also preprocessed with 2D transformations, for example random scaling (factor 0.5 to 1.5), random cropping of parts of the image at multiple locations, and random rotation. In addition, taking lighting conditions into account, the images were normalized by mean and standard deviation, and the number of images was increased from 80 to about 300.
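A minimal sketch of the normalization statistics used in such a step, assuming per-channel statistics computed over the training images; variable and function names are illustrative.

```python
# Illustrative sketch only (assumption): per-channel normalization
# statistics over a stack of training images scaled to [0, 1].
import torch

def channel_stats(images: torch.Tensor):
    """images: float tensor of shape (N, 3, H, W)."""
    mean = images.mean(dim=(0, 2, 3))
    std = images.std(dim=(0, 2, 3))
    return mean, std

# Usage, e.g., torchvision.transforms.Normalize(mean.tolist(), std.tolist())
# as the final step of the transform chain.
```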
(2) Learning
For hyperparameter setting, the data set was split (for example, a train-test-validation split): the images were divided into training data used to build the model, data used for hyperparameter tuning (model evaluation), and test data used for the final evaluation of the model. For example, the augmented sushi images were divided into 100 for training, 100 for validation, and 100 for testing. The image data were also optimized for the COCO format to improve convenience. The number of iterations was set to 3,000, and Detectron2 was used as the framework.
The various options, including hyperparameters, were set as follows (a training sketch follows the list):
・Batch size: 2n (n: integer)
・Shuffle: none
・Network input size: all input images
・Output: mask, bbox, and label
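A minimal sketch of such a training run in Detectron2, assuming COCO-format annotation files and taking n = 1 for the batch size; file paths, dataset names, and the 20-class setting (stated in the Verification section below) are assumptions for illustration, not the applicants' actual code.

```python
# Illustrative sketch only (assumption): register COCO-format sushi
# annotations and train for 3,000 iterations with Detectron2.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

register_coco_instances("sushi_train", {}, "sushi_train.json", "images/train")
register_coco_instances("sushi_val", {}, "sushi_val.json", "images/val")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("sushi_train",)
cfg.DATASETS.TEST = ("sushi_val",)
cfg.SOLVER.IMS_PER_BATCH = 2           # batch size 2n with n = 1
cfg.SOLVER.MAX_ITER = 3000             # 3,000 iterations as described
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 20   # 20 sushi classes (see Verification)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```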
(3) Verification
The discrimination rate for sushi toppings was evaluated visually, and the hyperparameters were manually evaluated and tuned. The hyperparameter settings were as follows:
・To avoid overfitting, the number of iterations was set to 500 or fewer.
・Cross-validation was performed so that the influence of the parameters on learning could be observed.
・Number of classes: 20 (20 types of sushi)
(4) Results
FIG. 17 is an explanatory diagram of the recognition results for sushi toppings. The accuracy of sushi topping type discrimination was 96.02% at an IoU threshold of 0.75 with a maximum of 100 bboxes. FIG. 17 shows each sushi piece enclosed by a rectangular marker and recognized in the image. Sushi pieces are often similar in shape and color, but the AI trained as described above recognizes the toppings accurately.
Sushi recognized in this way is arranged on a dish or container in consideration of color, shape, and so on. For this plating as well, an AI trained on information about good plating examples and plating examples to avoid can be used to judge the degree of suitability of a plating, and appropriate plating information can also be output from the AI.
(5) Another example of Example 9
Among sushi toppings, it is necessary to distinguish lean tuna (akami), medium fatty tuna (chu-toro), and fatty tuna (o-toro), but for a cook, judging the condition of tuna flesh requires extensive knowledge and skilled cooking technique. Here, a method of distinguishing akami, chu-toro, and o-toro by deep learning is described. ResNet18 was used as the core network, a CNN was used as the basic feature extractor, and PyTorch was used as the framework.
When generating the data set, 80 sushi images were labeled in the same manner as in (1) above. Specifically, for a custom data set, the sushi photographs were manually labeled and converted to VOC format. The images were also preprocessed with the same 2D transformations as in (1) above and, taking lighting conditions into account, normalized by mean and standard deviation, increasing the number of images from 80 to about 300. However, if the lighting conditions are identical, normalization is not necessary.
In training, the data set was split for hyperparameter setting: the images were divided into training data used to build the model, data used for hyperparameter tuning, and test data used for the final evaluation of the model. For example, the augmented sushi images were divided into 100 for training, 100 for validation, and 100 for testing. The various options, including hyperparameters, were set as follows (a classifier sketch follows the list):
・Batch size: 16
・Shuffle: yes
・Network input size: 32n × 32n (n: integer)
・Framework: PyTorch
・Basic feature extractor: CNN
・Output: label
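A minimal sketch of such a classifier, assuming the three tuna labels and the listed options (batch size 16, shuffle on); the training loop and all names are illustrative, not the applicants' actual code.

```python
# Illustrative sketch only (assumption): ResNet18 classifier for the three
# tuna labels (akami, chu-toro, o-toro).
import torch
import torch.nn as nn
import torchvision.models as models
from torch.utils.data import DataLoader

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 3)  # akami / chu-toro / o-toro

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader: DataLoader) -> float:
    """One pass over a loader built with batch_size=16, shuffle=True."""
    model.train()
    total = 0.0
    for images, labels in loader:  # images resized to 32n x 32n
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / max(len(loader), 1)
```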
As a result of verification, the label discrimination accuracies obtained were:
Akami (lean): 99.39%
Chu-toro (medium fatty): 98.23%
O-toro (fatty): 95.18%
Overall average: 97.61%
With an AI trained in this way, visually similar types of sushi toppings are accurately recognized. Here, the distinction between akami, chu-toro, and o-toro was recognized, but in the same manner it is also possible to discriminate species of tuna, such as Pacific bluefin tuna, southern bluefin tuna, and bigeye tuna. Furthermore, this is not limited to tuna: growth stages of yellowtail, such as inada, warasa, and buri, can also be discriminated. Example 9 or this alternative example can be applied to the discrimination of various fish types.
In this embodiment, discrimination of sushi toppings has been described as an example, but the ingredient discrimination method of this embodiment is not limited to sushi toppings and is equally applicable to other ingredients and dishes such as fish, meat, and vegetables.
[Embodiment 2]
Embodiment 2 describes the provision of AI assistant solutions and a store optimization platform that use a learning control system equipped with an AI platform to assist management by a store manager or other administrator.
First, recognition of the dining-area (hall) situation using AI image recognition technology will be described. For recognizing the hall situation, an inventory notification assistant, a table-clearing notification assistant, a visitor notification and demand forecasting assistant, a menu recommendation assistant, and the like are used. These assistants use models trained in the AI demand forecasting unit 11, the AI food loss forecasting unit 12, the AI customer analysis unit 13, the AI product analysis unit 14, the AI staff work analysis unit 15, the AI quality control unit 16, and the like of the AI platform 10 described in Embodiment 1.
Staff often cannot check frequently enough for shortages at self-service salad bars and the like, which leads to complaints. By recognizing the stock status of the salad bar and the like and notifying the store manager or staff before items run out, inventory can be optimized. This is expected to improve table turnover and customer satisfaction.
Staff busy serving meals to customers may postpone clearing tables, so customers are sometimes kept waiting at the entrance even when seats are vacant. The table-clearing notification assistant notifies staff of seats that need to be cleared, which maximizes store turnover and improves sales.
Failing to notice a customer's arrival can lead to complaints, and a customer who has entered may simply leave, resulting in lost sales. The visitor notification and demand forecasting assistant notifies staff of customers' waiting status and of demand forecast from past data, making it possible to reduce opportunity loss and food loss, thereby improving customer satisfaction.
Staff may not notice that a customer's drink is running low. The menu recommendation assistant notifies the store manager or staff when a customer's glass is empty or nearly empty, reducing opportunity loss, maximizing sales, and at the same time improving customer satisfaction.
A cooking reproduction assistant or the like is used for recognizing the cooking situation in the kitchen. The taste of the menu items offered can differ depending on the cook, which sometimes leads to complaints. With the cooking reproduction assistant, the AI learns the cooking methods of an expert cook and teaches them so that even an apprentice cook can reproduce them. For example, by adopting the steak-cooking example of Example 1 of Embodiment 1, customer satisfaction can be improved and brand strength can be maintained and enhanced through consistent taste.
The functions of the AI platform 10 described in Embodiment 1, namely the AI supply and demand forecasting unit 11, the AI automatic food loss forecasting unit 12, the AI customer analysis unit 13, the AI product analysis unit 14, the AI staff work analysis unit 15, the AI quality control unit 16, and the like, make it possible to provide the AI assistant solutions and store optimization platform described above. For example, the cooking reproduction assistant uses a model trained in the AI quality control unit 16 on the cooking methods of expert cooks to teach the cooks at each store an appropriate cooking method suited to the store's situation. This makes it possible to serve dishes at every store that are uniform in quality and comparable to those prepared by an expert cook.
Although embodiments of the present invention have been described above, these embodiments and examples illustrate a learning control system for embodying the technical idea of the present invention and do not limit the present invention to them; the invention is equally applicable to other embodiments and examples, parts of these embodiments and examples may be omitted, added, or changed, and the embodiments and examples may also be combined.
10 … AI platform  11 … Demand forecasting unit
12 … AI automatic food loss forecasting unit  13 … AI customer analysis unit
14 … AI product analysis unit  15 … AI staff work analysis unit
16 … AI quality control unit  20 … Information network
21 … Cooking robot  22 … Business automation AI robot
23 … AI restaurant

Claims (10)

1. A learning control system comprising:
(a) a demand forecasting module that forecasts sales,
(b) an image recognition module that recognizes objects,
(c) a voice recognition module that recognizes the state of an object by sound,
(d) a recommendation module that recommends products to customers, or
(e) an anomaly detection module that detects an abnormal situation of equipment,
the system comprising at least one of the machine learning modules (a) to (e), and further comprising:
(f) an AI platform including a learning module that performs machine learning other than (a) to (e) above and/or uses the information and results of at least one of the modules (a) to (e) above,
wherein learning control is performed using a model trained by the AI platform.
2. The learning control system according to claim 1, wherein the machine learning in the AI platform includes at least one of supervised learning, unsupervised learning, or reinforcement learning, and/or deep learning, and at least one of store management, store reservation, ordering or settlement, ingredient organization, ingredient supply, preparation, cooking, plating, serving or delivery, clearing or washing, advance preparation, or purchasing can be executed.
3. The learning control system according to claim 2, wherein a CNN is used as a neural network for the deep learning, and teaching data are augmented by labeling and 2D transformation as preprocessing.
4. The learning control system according to claim 2 or 3, wherein RGB-series signals, HSV-series signals, and infrared image signals are used in the preprocessing for the deep learning.
5. The learning control system according to any one of claims 1 to 4, wherein a neural network trained by deep learning on images of a food's cooking process outputs, from an input image of the food, at least one of the food's volume, weight, fat ratio, calories, surface area, temperature difference, heating temperature, cooking timing, timing for turning the food over during grilling, finishing timing, or cutting line.
6. The learning control system according to any one of claims 1 to 4, wherein the type of a food is image-recognized by a neural network trained by deep learning on images of a plurality of types of foods.
7. The learning control system according to any one of claims 1 to 4, wherein a cutting line of a food is image-recognized by a neural network trained by deep learning on images of foods.
8. The learning control system according to any one of claims 1 to 7, comprising at least one of an inventory notification assistant, a table-clearing notification assistant, a visitor notification or demand forecasting assistant, a menu recommendation assistant, or a cooking reproduction assistant, the system being used for store management.
9. The learning control system according to any one of claims 1 to 8, wherein the products provided to customers include at least one of an eat-in menu, a take-out menu, dishes, soft drinks, alcoholic drinks, hot drinks, or cold drinks.
10. The learning control system according to any one of claims 1 to 9, wherein the store includes at least one of a restaurant, a mobile store, a delivery store, a temporary store, an eat-in corner, a food court, an accommodation facility, a school, a hospital, a cafeteria, a supermarket, a department store, a mass retailer, a shop, or a convenience store.
PCT/JP2021/028444 2020-07-31 2021-07-30 Learning control system WO2022025282A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020131272 2020-07-31
JP2020-131272 2020-07-31

Publications (1)

Publication Number Publication Date
WO2022025282A1 true WO2022025282A1 (en) 2022-02-03

Family

ID=80036445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/028444 WO2022025282A1 (en) 2020-07-31 2021-07-30 Learning control system

Country Status (1)

Country Link
WO (1) WO2022025282A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9454732B1 (en) * 2012-11-21 2016-09-27 Amazon Technologies, Inc. Adaptive machine learning platform
JP2017506169A (en) * 2014-02-20 2017-03-02 マーク オレイニク Method and system for food preparation in a robot cooking kitchen
WO2019028269A2 (en) * 2017-08-02 2019-02-07 Strong Force Iot Portfolio 2016, Llc Methods and systems for detection in an industrial internet of things data collection environment with large data sets
WO2019055555A1 (en) * 2017-09-12 2019-03-21 Nantomics, Llc Few-shot learning based image recognition of whole slide image at tissue level
JP2019053433A (en) * 2017-09-13 2019-04-04 ヤフー株式会社 Prediction apparatus, prediction method, and prediction program
WO2019068616A1 (en) * 2017-10-02 2019-04-11 Imec Vzw Secure broker-mediated data analysis and prediction
JP2019075009A (en) * 2017-10-18 2019-05-16 パナソニックIpマネジメント株式会社 Work support system, kitchen support system, work support method, and program
JP2020109614A (en) * 2018-12-28 2020-07-16 キヤノン株式会社 Image processing apparatus, image processing system, image processing method, and program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116742519A (en) * 2023-08-10 2023-09-12 宗汉电通技术(深圳)有限公司 GIS equipment dustless installation environment intelligent management system based on panorama monitoring
CN116742519B (en) * 2023-08-10 2024-01-19 宗汉电通技术(深圳)有限公司 GIS equipment dustless installation environment intelligent management system based on panorama monitoring

Similar Documents

Publication Publication Date Title
US11556889B2 (en) Object recognition system for an appliance and method for managing household inventory of consumables
US7973642B2 (en) RFID food production, inventory and delivery management method for a restaurant
US20200334628A1 (en) Food fulfillment with user selection of instances of food items and related systems, articles and methods
US20070251521A1 (en) RFID food production, inventory and delivery management system for a restaurant
US20190066239A1 (en) System and method of kitchen communication
US10949935B2 (en) System and method for implementing a centralized customizable operating solution
JP7500115B2 (en) Image-based method, device and system for classifying and selling packaged meat
CN113614763A (en) Integrated foreground and background restaurant automation system
KR102342184B1 (en) Cafeteria management system
EP3545485A1 (en) Self-shopping refrigerator
CN109636550A (en) A kind of DIY intelligent cooking control method and system
WO2022025282A1 (en) Learning control system
JP2019045980A (en) Information processing apparatus, information processing method, and program
JP7376489B2 (en) Methods and systems for classifying foods
US11562338B2 (en) Automated point of sale systems and methods
JP2022530263A (en) Food measurement methods, equipment and programs
CN107122926A (en) Method and its system that cooking food remotely places an order
US20230145313A1 (en) Method and system for foodservice with instant feedback
US20230178212A1 (en) Method and system for foodservice with iot-based dietary tracking
Sturm et al. Examining Consumer Responses to Calorie Information on Restaurant Menus in a Discrete Choice Experiment
CN115439908A (en) Face recognition self-service weighing consumption system
EP4271969A1 (en) Food processing system
TW202133082A (en) Fried food display management device and fried food display management method
KR20210049704A (en) A method, device and program for measuring food
TW202129563A (en) Storage tank management system and storage tank management method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21850846

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21850846

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP