US20210174169A1 - Method to predict food color and recommend changes to achieve a target food color - Google Patents
- Publication number
- US20210174169A1 (U.S. application Ser. No. 17/180,451)
- Authority
- US
- United States
- Prior art keywords
- formula
- color
- vector
- initial
- recommended
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06N3/0454—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- A—HUMAN NECESSITIES
- A23—FOODS OR FOODSTUFFS; TREATMENT THEREOF, NOT COVERED BY OTHER CLASSES
- A23L—FOODS, FOODSTUFFS, OR NON-ALCOHOLIC BEVERAGES, NOT COVERED BY SUBCLASSES A21D OR A23B-A23J; THEIR PREPARATION OR TREATMENT, e.g. COOKING, MODIFICATION OF NUTRITIVE QUALITIES, PHYSICAL TREATMENT; PRESERVATION OF FOODS OR FOODSTUFFS, IN GENERAL
- A23L5/00—Preparation or treatment of foods or foodstuffs, in general; Food or foodstuffs obtained thereby; Materials therefor
- A23L5/40—Colouring or decolouring of foods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
Definitions
- the disclosure generally relates to food science and artificial intelligence, in particular, use of machine learning to predict color of a food item and/or recommend changes to achieve a target color.
- FIG. 1 illustrates a system comprising a color predictor configured to predict the color of a food item given its formula in certain embodiments
- FIG. 2 illustrates a block diagram of the color predictor in certain embodiments
- FIG. 3 illustrates an example encoding vector generated using the multi-warm encoder for an example given formula according to certain embodiments
- FIG. 4 illustrates an embodiment of a system which can utilize an ingredient embedder to incorporate unsupervised data for training an embedding layer, in certain embodiments
- FIG. 5 illustrates an embodiment of a system comprising a color recommender configured to provide a recommendation for changes in the given formula for a food item to achieve the desired color of the food item;
- FIG. 6 illustrates a block diagram that can be used to describe the loss function model implemented by the color recommender, according to certain embodiments
- FIG. 7 illustrates a computer-implemented method to determine color of a food item according to certain embodiments.
- FIG. 8 illustrates a computer-implemented method to provide recommendation to achieve target color attributes for a food item according to certain embodiments.
- color of a food item prepared using a set of ingredients may not be as expected, or a chef or another entity may desire to improve or change the color of the food item without starting from scratch. In other instances, it may be desirable to predict the color of the food item before it is prepared or cooked, so that the recipe can be altered, or preparation skipped altogether. Certain embodiments can predict the color of a food item, given a formula for its recipe, by utilizing data science, color science, food science, and machine learning algorithms. The formula may include a list of ingredients and their respective quantities. Certain embodiments can also recommend changes in the given formula to achieve a desired color with minimal modifications to the recipe.
- the food item can include plant-based ingredients, animal-based ingredients, synthetic ingredients, and/or a combination thereof.
- the food item can be a plant-based food item that can mimic animal-based foods from the sensory (e.g., flavor and/or texture) and/or visual perspectives.
- a number of products (e.g., plant-based ingredients, etc.) are available in the market that can provide substitutes for animal-based foods such as chicken, meat patties, milk, etc.
- a combination of plant-based ingredients can be cooked using a certain recipe to taste, look and/or feel like sausage.
- Certain embodiments may approach the color prediction of the food item as a regression problem.
- a color predictor can use machine learning algorithms to predict the color in a certain color space given a list of strings representing the ingredients of the food item. In some instances, the prediction can be used to determine, even before the food item is cooked or prepared, whether the recipe needs to be altered or abandoned altogether.
- the color predictor can additionally or alternatively be used as a component in a color recommender to provide a recommendation for changes in the formula for the recipe to improve the color of the food item and/or to achieve a target color.
- the color recommender may additionally or alternatively implement a loss function to optimize the changes in the given formula by considering attribute distance, ingredient distance, ingredient likelihood, ingredient sparsity, and/or other factors. In a specific example, the color recommender can be based on all of attribute distance, ingredient distance, ingredient likelihood, and ingredient sparsity.
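As a concrete illustration of such a combined objective, the sketch below sums four weighted penalty terms. The function name, the form of each term, and the weights are all hypothetical; the disclosure names the factors but not an exact formulation.

```python
import numpy as np

def recommender_loss(pred_color, target_color, new_formula, old_formula,
                     likelihoods, weights=(1.0, 0.5, 0.1, 0.1)):
    """Illustrative combined loss for a color recommender:
    attribute distance + ingredient distance + ingredient (un)likelihood
    + ingredient sparsity. Term forms and weights are assumptions."""
    w_attr, w_ing, w_lik, w_sparse = weights
    attribute_distance = np.linalg.norm(pred_color - target_color)    # gap to target color
    ingredient_distance = np.linalg.norm(new_formula - old_formula)   # size of formula change
    eps = 1e-9
    unlikelihood = -np.sum(new_formula * np.log(likelihoods + eps))   # penalize implausible ingredients
    sparsity = np.count_nonzero(new_formula > 1e-4)                   # prefer formulas with few ingredients
    return (w_attr * attribute_distance + w_ing * ingredient_distance
            + w_lik * unlikelihood + w_sparse * sparsity)
```

With this shape, an optimizer that lowers the loss trades off hitting the target color against staying close to the original, plausible, sparse formula.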
- FIG. 1 illustrates a system 100 comprising a color predictor 102 configured to predict the color of a food item given its formula (e.g., based on its formula, etc.).
- the color predictor 102 may receive an initial formula 104 of a recipe for the food item and provide color attributes 106 of the predicted color.
- the color attributes 106 may represent the color in a color space with floating-point values.
- the color predictor 102 may determine that the color of the food item is a particular color (e.g., brown, white, yellow, etc.) based on a color palette or a color scheme.
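The palette lookup described above can be sketched as a nearest-neighbor search in a color space. The palette names and CIELAB values below are rough illustrative approximations, not data from the disclosure:

```python
import numpy as np

# Hypothetical palette of named colors in CIELAB (L*, a*, b*); values approximate.
PALETTE = {
    "white":  np.array([95.0,  0.0,  2.0]),
    "yellow": np.array([85.0, -3.0, 80.0]),
    "brown":  np.array([40.0, 20.0, 30.0]),
    "red":    np.array([50.0, 65.0, 45.0]),
}

def nearest_palette_color(lab):
    """Map a predicted color-attribute vector to the closest named palette
    color by Euclidean distance in Lab space."""
    return min(PALETTE, key=lambda name: np.linalg.norm(lab - PALETTE[name]))
```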
- the initial formula 104 of the recipe for the food item preferably includes two or more ingredients and their respective quantities. Additionally or alternatively, the food item may include any suitable number of ingredients and/or their respective quantities.
- the recipe may include plant-based ingredients, animal-based ingredients, water-based ingredients, synthetic ingredients, and/or a combination thereof.
- the plant-based ingredients may include vegetables (e.g., onions, potatoes, peas, garlic, spinach, carrots, celery, squash, etc.), fruit (e.g., apples, pears, grapes, etc.), herbs (e.g., oregano, cilantro, basil, etc.), spices (e.g., black pepper, turmeric, red chili peppers, cinnamon, etc.), oils (e.g., corn oil, olive oil, almond oil, etc.), nuts (e.g., almonds, walnuts, pistachios, etc.), legumes (e.g., lentils, dried peas, soybeans, pulses, etc.), starch, proteins, fibers, carbohydrates, sugars, and/or other suitable plant-based ingredients.
- animal-based ingredients may include dairy products (e.g., milk, butter, cheese, yogurt, ice cream, etc.), egg-based products (e.g., mayonnaise, salad dressings, etc.), meat products (e.g., burger patties, sausages, hot dogs, bacon, etc.), seafood (e.g., fish, crab, prawns, etc.), and/or other suitable animal-based ingredients, etc.
- Synthetic ingredients may include artificially produced food, e.g., artificial meats, artificial sweeteners, artificial milk, and/or other suitable synthetic ingredients, etc.
- the initial formula 104 may include a quantity 1 (e.g., a first quantity, etc.) for an ingredient a (e.g., a first ingredient, etc.), a quantity 2 (e.g., a second quantity, etc.) for an ingredient b (e.g., a second ingredient, etc.), and a quantity n (e.g., an nth quantity, etc.) for an ingredient m (e.g., an mth ingredient, etc.).
- the quantities may be represented using percentages, fractions, units, and/or other suitable value types and/or methods.
- the initial formula 104 may include 47.2% water, 20.6% pea protein, and 15.7% coconut oil, hence the ingredient a (e.g., first ingredient, etc.) is “water”, ingredient b (e.g., second ingredient, etc.) is “pea protein”, ingredient m (e.g., mth ingredient, etc.) is “coconut oil”, quantity 1 (e.g., first quantity, etc.) is “47.2”, quantity 2 (e.g., second quantity, etc.) is “20.6” and quantity n (e.g., nth quantity, etc.) is “15.7”.
- the color predictor 102 may be configured to perform the color prediction using a prediction model based on neural networks, regression models, classification models, and/or other suitable machine learning algorithms.
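As one hedged sketch of such a prediction model, a small feed-forward network can map the sparse formula vector to a three-dimensional color-attribute vector. The layer sizes and the (untrained, random) weights below are placeholders, not the patent's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes: 1,000 possible ingredients, 64 hidden units,
# 3 output color attributes (e.g., CIELAB L*, a*, b*).
n_ingredients, n_hidden = 1000, 64
W1 = rng.normal(scale=0.1, size=(n_ingredients, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 3)); b2 = np.zeros(3)

def predict_color(formula_vec, W1, b1, W2, b2):
    """Forward pass of a minimal feed-forward regression model: sparse
    formula vector in, 3-dimensional color-attribute vector out."""
    hidden = np.maximum(0.0, formula_vec @ W1 + b1)   # ReLU hidden layer
    return hidden @ W2 + b2                           # linear regression head
```

In practice the weights would be fit by backpropagation against observed colors; only the shapes are the point here.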
- color predictor(s) 102 and/or other suitable models, suitable components of embodiments, and/or suitable portions of embodiments of methods described herein can include, apply, employ, perform, use, be based on, and/or otherwise be associated with artificial intelligence approaches (e.g., machine learning approaches, etc.) including any one or more of: supervised learning (e.g., using gradient boosting trees, using logistic regression, using back propagation neural networks, using random forests, decision trees, etc.), unsupervised learning (e.g., using an Apriori algorithm, using K-means clustering), semi-supervised learning, a deep learning algorithm (e.g., neural networks, a restricted Boltzmann machine, a deep belief network method, a convolutional neural network method, a recurrent neural network method, stacked auto-encoder method, etc.), reinforcement learning (e.g., using a Q-learning algorithm, using temporal difference learning), a regression algorithm (e.g., ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, etc.), and/or any other suitable artificial intelligence approach.
- the ingredients a-m may be represented as a list of strings which can be converted to a multi-dimensional embedding vector to represent the ingredients in a meaningful space.
- “word2vec”, “Glove”, and/or another suitable ingredients embedding model may be used to produce the embedding vector.
- a vectorized version of a loss function (e.g., CIEDE2000) may be used to measure the difference between predicted and target colors.
- the ingredients embedding model can be trained by utilizing supervised data, unsupervised data and/or a combination thereof.
- the ingredients embedding model may additionally or alternatively be used to predict the probability of each ingredient to be in a recipe.
- the ingredients can be represented in any suitable form.
- the color attributes 106 produced by the color predictor 102 may be represented as a vector belonging to a certain color space. Alternatively, the color attributes 106 can be represented in a non-vector and/or any other suitable representation. In examples, the color space may correspond to CIELAB, RGB (red, green, blue), inverted RGB, HSV (hue, saturation, value), CMYK, and/or another suitable color space. In certain implementations, the color predictor 102 may use the images associated with the food item and identify a most prominent or strongest color in the food item to represent in the color attributes 106 belonging to the specific color space. The color predictor 102 is further described with reference to FIG. 2.
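One way to pick a "most prominent" color from an image is to cluster its pixels and take the centroid of the largest cluster. The tiny k-means below is an illustrative stand-in for such an extractor, with a deterministic initialization chosen for simplicity; it is not the patent's implementation:

```python
import numpy as np

def prominent_color(pixels, k=3, iters=10):
    """Cluster an (N, 3) RGB pixel array with a tiny k-means and return the
    centroid of the largest cluster as the most prominent color."""
    # Deterministic init: k pixels spread evenly through the array.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[counts.argmax()]
```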
- FIG. 2 illustrates a block diagram of the color predictor 102 in certain embodiments.
- the color predictor 102 may include an ingredient parser 202 , a multi-warm encoder 204 , an ingredients embedder 206 , an image color extractor 208 , a correction process 210 , a color encoder 212 , a loss function model 214 , and a prediction model 216 . Additionally or alternatively, the color predictor 102 can include any suitable combination of the above.
- Raw recipe data 218 may be part of a supervised dataset and/or an unsupervised dataset.
- the supervised dataset may include input data and output data that may be labeled to be used as a training dataset (e.g., a first training dataset) for future data processing.
- the supervised dataset may include data for a recipe for a food item as an input and a color attribute of the food item as the output.
- the unsupervised dataset may include only input data without any corresponding output data.
- any suitable data types can be used for the training dataset.
- the raw recipe data 218 may store data associated with multiple recipes of various food items (e.g., one or more recipes of one or more food items; in a 1:1 recipe to food item relationship and/or any suitable numerical relationship between recipes and food items, etc.).
- the raw recipe data 218 may store ingredients as a list of tuples of the ingredient name (e.g., string) and its proportion (e.g., percentage or quota).
- the ingredients may include [ ⁇ “name”: “water”, “quota”: 47.197640117994105 ⁇ , ⁇ “name”: “pea protein (textured)”, “quota”: 20.64896755162242 ⁇ , ⁇ “name”: “coconut oil (without flavor)”, “quota”: 15.732546705998034 ⁇ , . . . ].
- the raw recipe data 218 may also store recipe as a list of step strings, e.g., [ ⁇ “step”: 1 , “description”: “Weigh 20.65 g of pea protein (textured), 1.57 g of salt and 0.10 g of beetroot powder in a bowl” ⁇ , . . . ].
- the raw recipe data 218 may also store observed colors as a list of strings and one or more images of each food item.
- the raw recipe data 218 may utilize JSON format and/or another suitable format to store the information.
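A record in that JSON layout might look like the following; the field names `ingredients`, `recipe`, and `observed_colors` follow the examples above, but the exact schema is an assumption:

```python
import json

# Hypothetical raw_recipe_data record in the JSON layout described above.
record = json.loads("""
{
  "ingredients": [
    {"name": "water", "quota": 47.197640117994105},
    {"name": "pea protein (textured)", "quota": 20.64896755162242},
    {"name": "coconut oil (without flavor)", "quota": 15.732546705998034}
  ],
  "recipe": [
    {"step": 1, "description": "Weigh 20.65 g of pea protein (textured), 1.57 g of salt and 0.10 g of beetroot powder in a bowl"}
  ],
  "observed_colors": ["light brown"]
}
""")

# Turn the ingredient list into (name, proportion) tuples for downstream parsing.
formula = [(ing["name"], ing["quota"]) for ing in record["ingredients"]]
```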
- the raw recipe data 218 may be stored in memory, e.g., RAM, EEPROM, flash memory, hard disk drives, optical disc drives, solid state memory, and/or any type of memory suitable for data storage.
- the ingredient parser 202 may be configured to parse the raw recipe data 218 .
- the ingredient parser 202 may process the ingredient strings in the raw recipe data 218 to remove any duplicate ingredients, remove parentheticals and post-comma descriptors, collapse double spaces, remove quantities using list of quantity words, filter overly-long ingredients and match to clean reference ingredients when possible, and/or any other text processing and/or other suitable processing.
- “organic wild berries (organic blackberries, organic strawberries, organic bilberries, organic raspberries)” may be processed to “berries.”
- “1 ⁇ 2 (12 ounce) box vanilla wafer cookies (such as Nilla®), crushed very fine” may be processed to “vanilla wafer cookies.” Processing can be automatic, computational, manual, and/or other suitable types of processing.
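The cleanup steps above can be sketched with a few regexes. The quantity-word list and patterns here are assumptions, and matching against a clean reference-ingredient list (which would turn "organic wild berries" into "berries") is out of scope:

```python
import re

QUANTITY_WORDS = {"ounce", "ounces", "box", "cup", "cups", "g", "gram", "grams"}  # illustrative list

def clean_ingredient(raw):
    """Sketch of ingredient-string cleanup: drop parentheticals and
    post-comma descriptors, strip quantity tokens, collapse whitespace."""
    s = re.sub(r"\([^)]*\)", " ", raw)   # remove parentheticals
    s = s.split(",")[0]                  # drop post-comma descriptors
    tokens = [t for t in s.split()
              if t.lower() not in QUANTITY_WORDS
              and not re.match(r"^[\d/⁄½¼¾.]+$", t)]   # drop numeric/fraction tokens
    return re.sub(r"\s{2,}", " ", " ".join(tokens)).strip()
```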
- the multi-warm encoder 204 may be configured to represent the formula as a sparse proportional vector.
- the multi-warm encoder 204 functions to generate (e.g., output, etc.) one or more multi-warm vectors 300 , such as based on one or more formulas 302 and/or other suitable inputs.
- Each dimension of the sparse proportional vector may correspond to a unique ingredient, and the value in each dimension can be the proportion of that ingredient present in the food item.
- the proportions are scaled to sum to 1 instead of 100, but any suitable type of scaling can be employed.
- the multi-warm encoded vector may include entries that are “warm” (e.g., floating point) instead of “hot” (e.g., one) with most dimensions being zeros and few dimensions being non-zero.
- the formula can be represented in any suitable manner.
- FIG. 3 shows an example multi-warm vector 300 generated using the multi-warm encoder 204 for a given formula 302 .
- the formula 302 may be similar to the initial formula 104 in FIG. 1 .
- the formula 302 may include “water: 47.2%”, “pea protein: 20.6%”, and “coconut oil: 15.7%” among other ingredients.
- the multi-warm encoder 204 may encode the formula 302 to produce a sparse proportional vector comprising [0, 0, 0, . . . , 0, 0.472, 0, . . . , 0, 0.206, 0, . . . , 0, 0.157, 0, . . . ] as represented by the multi-warm vector 300 .
- the multi-warm encoding may be used to perform a weighted sum of embeddings. Since embeddings are generally trained without quantities, the multi-warm values can be scaled using each ingredient's distribution.
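The multi-warm encoding itself can be sketched as follows; the small vocabulary here is a placeholder for the full ingredient corpus:

```python
import numpy as np

# Hypothetical ingredient vocabulary; in practice one dimension exists per
# unique ingredient in the corpus.
VOCAB = ["salt", "water", "sugar", "pea protein", "flour", "coconut oil", "milk"]
INDEX = {name: i for i, name in enumerate(VOCAB)}

def multi_warm_encode(formula):
    """Encode a {ingredient: percent} formula as a sparse proportional
    vector: each present ingredient's dimension holds its proportion
    scaled to sum to 1 (a 'warm' value); all other dimensions are zero."""
    vec = np.zeros(len(VOCAB))
    for name, percent in formula.items():
        vec[INDEX[name]] = percent / 100.0
    return vec

# Mirrors the FIG. 3 example: water 47.2%, pea protein 20.6%, coconut oil 15.7%.
v = multi_warm_encode({"water": 47.2, "pea protein": 20.6, "coconut oil": 15.7})
```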
- the ingredients embedder 206 may be configured to convert the ingredient strings into multi-dimensional embedding vectors that may contain pertinent information about the ingredients.
- the ingredient embeddings may be generated by training an embedding layer using a model similar to an autoencoder for unsupervised feature learning.
- the autoencoder is a type of artificial neural network which can be used for dimensionality reduction by copying the most relevant aspects of its input to its output. Given a set of ingredient strings, the embedding layer may provide, for each ingredient, a likelihood of that ingredient also appearing in the recipe.
- the ingredients embedder 206 may be used to identify ingredients from an unsupervised dataset that may be similar to a first set of ingredients used in the supervised dataset, e.g., represented by the raw recipe data 218 in FIG. 2 .
- use of the ingredients embedder 206 may be beneficial to incorporate unsupervised data and domain knowledge to supplement a limited amount of supervised data. In certain embodiments, this is further explained with reference to FIG. 4 .
- FIG. 4 illustrates a system 400 which can utilize the ingredient embedder 206 to incorporate unsupervised data (e.g., as a second training dataset, etc.) for training an embedding layer, in certain embodiments.
- ingredients 402 may include a second set of ingredients belonging to an unsupervised dataset.
- the ingredients embedder 206 may be used to convert the ingredient strings from the ingredients 402 into multi-dimensional embedding vectors 404 to represent the ingredients in a meaningful space.
- functionality of the ingredients embedder 206 may be adapted from the word2vec (e.g., skip-gram, continuous bag-of-words (CBOW)) and/or similar model, but any suitable approaches can additionally or alternatively be used.
- a respective embedding vector for each ingredient in the formula may be fetched from the ingredients embedder 206 to provide embedded vectors 404 .
- An adder 406 may be used to generate an embedded formula 408 by performing the weighted sum of the embedded vectors 404 using their proportions in the formula. Since the embeddings can be trained without quantities in specific examples, the multi-warm vector values can be scaled using each ingredient's distribution. In some implementations, this may be performed using a dot product with an embedding matrix.
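- A minimal sketch of this weighted sum as a dot product with an embedding matrix; the matrix values are random placeholders and the dimensions are illustrative assumptions:

```python
import numpy as np

# Sketch of the adder 406: the embedded formula as a proportion-weighted
# sum of ingredient embeddings, computed as a dot product of the
# multi-warm vector with an embedding matrix.
rng = np.random.default_rng(0)
num_ingredients, embed_dim = 6, 4
embedding_matrix = rng.normal(size=(num_ingredients, embed_dim))

multi_warm = np.array([0.0, 0.472, 0.0, 0.206, 0.0, 0.157])
embedded_formula = multi_warm @ embedding_matrix  # dense, may contain negatives

# Equivalent explicit weighted sum over the embedding rows:
weighted_sum = sum(multi_warm[i] * embedding_matrix[i] for i in range(num_ingredients))
```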
- the embedded formula vector 408 may have a different (e.g., much smaller) dimension, may be dense, and/or may contain negative values.
- the embedding layer 410 may be implemented using a linear neural network layer and/or another suitable model (e.g., a different type of neural network, any suitable models described herein, etc.).
- the embedding layer 410 may be trained to incorporate unsupervised data and/or domain knowledge (e.g., a second training set) to supplement the limited amount of supervised data (e.g., a first training set) represented in the raw recipe data 218 .
- the unsupervised data may belong to a large set of recipes obtained from the web, generated automatically or manually, and/or otherwise procured.
- leveraging the embedding layer 410 can confer improvements in computer function through facilitating improvements in accuracy of predicting color of a food item and/or recommending changes to achieve a target color for a food item.
- the activation function 412 may be used to generate a vector 414 with a probability distribution of the ingredients, which may represent the likelihood of also being in a given recipe.
- “Softmax” or a similar activation function may be used to output the vector 414 .
- the probability predictions in the vector 414 can add up to one.
- the vector 414 may include “milk: 0.03”, “butter: 0.76” and “olive oil: 0.21.”
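- A minimal softmax sketch illustrating how such a probability vector can be produced; the scores are illustrative:

```python
import math

# Minimal softmax sketch: converts raw scores into a probability
# distribution summing to one.
def softmax(scores):
    shifted = [s - max(scores) for s in scores]  # for numerical stability
    exps = [math.exp(s) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]

# e.g., illustrative scores for milk, butter, olive oil:
probs = softmax([0.2, 3.4, 2.1])
```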
- the probability predictions generated by the system 400 may be useful to utilize generally large amounts of unsupervised data for supplementing the relatively smaller amount of supervised data for predicting the color(s) of the given food item(s).
- the images database 220 may include images in the form of pixel values (and/or other suitable form) for each food item associated with the raw recipe data 218 .
- the images database 220 and the raw recipe data 218 may be part of the same dataset.
- the color of each food item can be extracted from the corresponding images rather than taken from a text label. For example, in certain instances, a label for the color may be too broad (e.g., brown), multiple colors may be observed for a food item, and/or the label may include a non-color word.
- the image color extractor 208 in FIG. 2 may be configured to extract the color from the image of each food item stored in the images database 220 .
- RGB color space is discussed for color representation with reference to the image color extractor 208; however, it will be understood that the colors may be represented using any suitable color space without deviating from the scope of the disclosure. Additionally or alternatively, the image color extractor 208 can leverage any suitable approaches to extract colors from the image(s) of each food item.
- some of the food items may have multiple separate colors (e.g., a pie's crust and filling, an empanada's outside color and inside color, a cake and its toppings, and/or food items with irregularly spread out ingredients).
- the preparation process, in addition to the raw ingredients (and/or in addition to any suitable types of data described herein, etc.), may be used to accurately predict the color of the food item.
- the preparation process can affect the coloration of homogeneous foods through processes such as baking.
- the correction process 210 may be used to correct any errors in the color extraction process.
- the images may be inspected manually and/or by automated means for correction. In instances where there are multiple images for a food item, the most relevant image may be used. In some instances, raw recipe data for food items without any images, or with multiple images with highly varying colors, may be discarded. Any suitable image processing techniques can be applied for processing the images into a form suitable for use by the image color extractor 208.
- Image processing techniques can include any one or more of: image filtering, image transformations, histograms, structural analysis, shape analysis, object tracking, motion analysis, feature detection, object detection, stitching, thresholding, image adjustments, mirroring, rotating, smoothing, contrast reduction, and/or any other suitable image processing techniques.
- the color encoder 212 may be configured to encode the colors of the food item for representing in a color space.
- the colors to be encoded may have been extracted using the image color extractor 208 or provided in the raw recipe data 218 for the respective food item.
- the encoding may be based on RGB space, inverted RGB space, HSV space, CMYK space, CIE L*a*b* (or CIELAB or Lab) and/or another suitable space.
- representation may be more or less accurate based on the regression model (and/or other suitable model) used for learning, lighting of the images, and/or other factors.
- CIELAB may provide better color representation as it is designed around human perception of color rather than computer display of the image.
- CIELAB color space is a standard defined by the International Commission on Illumination (CIE). It can express color as three values: L* for the lightness from black (0) to white (100), a* from green (−) to red (+), and b* from blue (−) to yellow (+). In some instances, CIELAB color space may provide better results for implementing a loss function (e.g., delta E metric) for the color difference. However, any suitable color space can be used.
- the loss function 214 may be configured to improve the prediction of visually perceived color differences through the introduction of various corrections to the color represented in a particular color space.
- the loss function 214 may implement the CIEDE2000 delta E metric to describe the distance between two colors represented in the CIELAB color space.
- the CIELAB color space may be designed so that its Euclidean distance is delta E.
- the CIEDE2000 delta E metric may be designed so that its unit distance is the “just noticeable difference” between the two colors.
- the “just noticeable difference” in delta E is usually 1, e.g., if two colors have a delta E of less than 1, the difference is imperceptible, and if larger than 1, it is perceptible.
- CIEDE2000 can be designed to deconstruct the color into Lightness, Chroma and Hue (LCH), and to compensate for nonlinearities in LCH and colors near blue.
- CIEDE2000 can also include weighting factors for L, C, H dependent on the application (e.g., cars, textiles, computer graphics, etc.).
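- For illustration, the simpler CIE76 delta E, which is exactly the Euclidean distance in CIELAB, can be sketched as follows; CIEDE2000 itself adds the LCH-based corrections and weights described above and is considerably longer:

```python
import math

# CIE76 delta E: the plain Euclidean distance between two CIELAB colors.
def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two illustrative Lab colors differing only slightly in b*:
d = delta_e_cie76((50.0, 2.0, 2.0), (50.0, 2.0, 2.5))  # below 1: imperceptible
```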
- the prediction model 216 may be configured to take into account feedback from the loss function 214 and predict the color attributes 106 of the given food item.
- the color attributes 106 may be represented in the CIELAB space, e.g., L*a*b*.
- the CIELAB space can be represented in a three-dimensional space using a three-dimensional model.
- the three coordinates of CIELAB may represent the lightness of the color, its position between red/magenta and green, and its position between yellow and blue. Generally, the same amount of numerical change in the L*a*b* values corresponds to roughly the same amount of visually perceived change.
- the color attributes 106 may be represented in any suitable color space (e.g., RGB, inverted RGB, HSV, etc.) without deviating from the scope of the disclosure.
- the prediction model 216 may further determine that the color of the food item is a particular color from a color palette or color scheme.
- the prediction model 216 may utilize a regression model, e.g., a type of neural network and/or ordinary least squares (OLS) and/or other suitable models, based on a number of factors.
- the effect of ingredients on color is generally not linear, e.g., cooking can change the color of an ingredient.
- certain ingredients can dominate the color.
- a feedforward neural network may perform better than the OLS model in certain embodiments.
- a configuration of the neural network may include multiple dense layers with leaky ReLU activation and a final output layer with linear activation with embeddings for the input. Both the input data and the output data may be scaled for mean 0 and variance 1.
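- A minimal forward-pass sketch of such a configuration, with random placeholder weights; the layer sizes are illustrative assumptions, not the patent's values:

```python
import numpy as np

# Sketch of the described configuration: dense layers with leaky ReLU
# activation and a final linear output layer producing L*, a*, b*.
def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)  # dense layer 1
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)  # dense layer 2
W3, b3 = rng.normal(size=(32, 3)), np.zeros(3)    # linear output layer

def predict_color(embedded_formula):
    h = leaky_relu(embedded_formula @ W1 + b1)
    h = leaky_relu(h @ W2 + b2)
    return h @ W3 + b3  # linear activation on the output

lab = predict_color(rng.normal(size=16))
```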
- prediction models 216 and/or other suitable models, suitable components of embodiments, and/or suitable portions of embodiments of methods described herein can include, apply, employ, perform, use, be based on, and/or otherwise be associated with artificial intelligence approaches (e.g., machine learning approaches, etc.) including any one or more of: supervised learning (e.g., using gradient boosting trees, using logistic regression, using back propagation neural networks, using random forests, decision trees, etc.), unsupervised learning (e.g., using an Apriori algorithm, using K-means clustering), semi-supervised learning, a deep learning algorithm (e.g., neural networks, a restricted Boltzmann machine, a deep belief network method, a convolutional neural network method, a recurrent neural network method, stacked auto-encoder method, etc.), reinforcement learning (e.g., using a Q-learning algorithm, using temporal difference learning), a regression algorithm (e.g., ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, etc.), and/or any other suitable artificial intelligence approaches.
- the color predictor 102 may utilize only the ingredients of the food item to predict the color since data from the raw recipe data 218 may be limited.
- the recipe information can be unstructured, and there can be insufficient data available without structure. Predicting the color of a food item may not be a deterministic problem due to various factors—multiple dishes with different colors can be made from the same ingredients, many food items may have more than one color (e.g., cakes, pies, soups), limited size of the training dataset, and/or other suitable factors. Thus, in such cases and/or in suitable scenarios, predicting a primary color of the food item with the recipe information may provide better results.
- the color predictor 102 may additionally or alternatively be used as a component in a system that may utilize a color recommender to provide a recommendation for changes in the formula for the recipe to improve its color to match the desired color. In certain embodiments, this is further discussed with reference to FIG. 5 .
- FIG. 5 illustrates an embodiment of a system 500 comprising a color recommender 502 for providing (e.g., configured to provide, etc.) one or more recommendations for changes in one or more initial formulas 104 to achieve the desired color of the food item given target color attributes 504 .
- the color recommender 502 may be based on stochastic gradient descent (SGD) and back propagation models.
- the color recommender 502 may utilize the color predictor 102 to predict the color of a food item given the initial formula 104 .
- the color recommender 502 can determine one or more recommendations for one or more changes (e.g., modifications, etc.) to one or more initial formulas 104 , based on output(s) of the color predictor 102 .
- the color recommender 502 may additionally or alternatively implement a loss function 510 to optimize the initial formula 104 to achieve the desired color.
- the color recommender 502 may provide (e.g., output, determine, etc.) a recommended formula 506 and/or new color attributes 508 corresponding to the modification in the initial formula 104 to achieve the target color attributes 504 .
- the target color attributes 504 and the new color attributes 508 may belong to the same color space as the color attributes 106 of FIG. 1 , e.g., CIELAB, RGB, or another color space.
- the target color attributes 504 , the new color attributes 508 , and the color attributes 106 can belong to different color spaces, and/or any suitable combination of attributes can belong to any suitable combination of color spaces.
- the initial formula 104 may include 47.2% water, 20.6% pea protein, and 15.7% coconut oil;
- the target color attributes 504 may include {R: 102, G: 23, B: 214}.
- the loss function 510 may take into consideration multiple factors while determining the changes in the initial formula 104 , e.g., attribute distance (e.g., how close is the initial formula 104 to the target color attributes 504 ), ingredient distance (e.g., how close is the recommended formula 506 to the initial formula 104 since it is desired to produce the same or similar type of food item for which the initial formula 104 was provided, and in specific examples to have minimal changes to the initial formula 104 , thus making the cooking process similar and/or easier such as for the chefs, etc.), ingredient likelihood (e.g., how sensible the new formula is, for example, using 80% salt in the recipe may not be typical), and/or ingredient sparsity (e.g., how many ingredients are in the recommended formula 506 , for example, it is not typical to have just a pinch of 300 ingredients).
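- These factors may be combined into a single objective, e.g., as a weighted sum; the weight values below are illustrative assumptions and would in practice be tuned to balance the competing factors:

```python
# Sketch of combining the four recommendation loss terms into one
# objective: attribute distance, ingredient distance, (negative log-)
# likelihood, and sparsity.
def total_loss(attribute_distance, ingredient_distance,
               negative_log_likelihood, sparsity,
               w_attr=1.0, w_ingr=0.1, w_like=0.01, w_sparse=0.001):
    return (w_attr * attribute_distance
            + w_ingr * ingredient_distance
            + w_like * negative_log_likelihood
            + w_sparse * sparsity)

loss = total_loss(4.2, 0.3, 7.1, 12.0)
```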
- FIG. 6 illustrates a block diagram 600 that can be used to describe the loss function model implemented by the color recommender 502 in certain embodiments.
- the color recommender 502 may be designed to optimize inputs to the color predictor 102 using the loss function model described in FIG. 6 .
- the color recommender 502 may operate on raw parameters 602 for a given formula of a recipe to implement the loss functions.
- the raw parameters 602 may include ingredients in the initial formula 104 , e.g., ingredient a, ingredient b, . . . , ingredient m.
- the raw parameters 602 may be optimized using a sparsity loss function 604 .
- the raw parameters 602 may be further normalized using a normalization layer 606 , such as before being operated by other loss functions, e.g., likelihood loss function 610 , attribute distance loss function 614 , and/or ingredient distance loss function 618 , etc., but the normalization layer 606 can be used at any suitable time.
- the sparsity loss function 604 may be implemented to bring the ingredients in the raw parameters 602 to zero unless they significantly improve the attribute loss.
- the attribute gradient may be balanced with the sparsity gradient.
- the slope near zero may be used to determine whether or not to sparsify an ingredient (e.g., drive its quantity to zero).
- the sparsity loss function 604 may also implement a step sparsity loss function, a magnitude sparsity loss function, an L1 norm, among others, to minimize the color difference.
- the sparsity loss function 604 can implement any suitable combination of loss functions.
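- An L1-norm sparsity penalty, one of the options named above, can be sketched as follows; the weight is an illustrative assumption:

```python
# Sketch of an L1-norm sparsity penalty on the raw ingredient
# parameters; it pushes quantities toward zero unless they sufficiently
# improve the attribute loss.
def l1_sparsity_loss(raw_params, weight=0.01):
    return weight * sum(abs(x) for x in raw_params)

penalty = l1_sparsity_loss([0.472, 0.206, 0.157, 0.001])
```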
- the normalization layer 606 may be used to normalize the raw parameters 602 for other loss metrics and to avoid interfering with the sparsity loss function 604 .
- raw parameters may be projected according to non-negativity and multi-warm constraints, and constrained optimization may be performed.
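- Such a projection can be sketched as follows, assuming the constraints are non-negativity and proportions summing to one:

```python
# Sketch of projecting raw parameters onto the non-negativity and
# multi-warm constraints: clip negatives to zero, then renormalize so
# the proportions sum to one.
def project(raw_params):
    clipped = [max(x, 0.0) for x in raw_params]
    total = sum(clipped)
    return [x / total for x in clipped] if total > 0 else clipped

normalized = project([0.5, -0.1, 0.3, 0.2])
```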
- the normalized ingredients may be encoded to generate a multi-warm vector using a multi-warm encoder 608 .
- the multi-warm encoder 608 may be similar to the multi-warm encoder 204 and the multi-warm vector may be similar to the multi-warm vector 300 shown in FIG. 3 .
- the color recommender 502 may be designed to optimize the ingredients in the multi-warm vector, which can feed to the color predictor 102 .
- Likelihood loss function 610 may be used to determine whether a given formula is a real or a typical formula. Likelihood may be defined as the probability of the given formula under the distribution of formulas, which may be proportional to the product of probability density function (PDF) values for all the ingredients. Likelihood loss function 610 may be operationalized as log-likelihood for numerical stability. Gradients for ingredients that are not in the recipe can be zeroed out below their minimum. The PDF values can be infinitesimal, but the actual likelihood may not be.
- likelihood(x) ∝ ∏_i { f_X(x_i) if x_i ≥ minval_i; 0 if x_i < minval_i }.  Equation (2)
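- A sketch of this computation as a log-likelihood for numerical stability: per-ingredient PDF values multiply, so their logs sum. The normal PDFs and minimum values below are illustrative assumptions:

```python
import math

# Sketch of the Equation (2) likelihood, operationalized as a
# log-likelihood; below an ingredient's minimum value the likelihood
# is zero (negative infinity in log space).
def log_likelihood(quantities, pdfs, minvals):
    total = 0.0
    for x, pdf, minval in zip(quantities, pdfs, minvals):
        if x < minval:
            return float("-inf")  # zero likelihood below the minimum
        total += math.log(pdf(x))
    return total

def normal_pdf(mu, sigma):
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative per-ingredient quantity distributions:
pdfs = [normal_pdf(0.4, 0.1), normal_pdf(0.2, 0.05)]
ll = log_likelihood([0.45, 0.21], pdfs, minvals=[0.01, 0.01])
```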
- Optimizing the likelihood may not always be preferable since it can push towards an irrelevant mode.
- the likelihood can be clipped above a threshold while avoiding any modification of the good recipes.
- for example, the penalty may apply only when the quantities are less likely than those of the original recipe.
- further improvements are possible by using quantile threshold for all ingredients, only modifying unlikely quantities similar to the original, or implementing inverse cumulative distribution function (CDF) instead of log-likelihood, among others.
- Ingredient distance loss function 618 may be implemented to determine how close the recommended formula 506 is to the initial formula 104. In some examples, it may only include gradients for ingredients that were in the original recipe. In examples, since new ingredients may change the percentages of original ingredients, just the selected ingredients may be normalized. In examples, this may result in those ingredients becoming a smaller part of the overall recipe; however, this may be mitigated by the likelihood component.
- logit-normal distribution may be used to calculate ingredient distance in the logit space.
- the logit (inverse sigmoid) function is generally linear near 0.5, can stretch small values, and takes inputs bounded in [0, 1].
- An example logit function is shown in Equation (3), where p is probability:
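- The logit is the standard log-odds function, logit(p) = ln(p / (1 − p)); a minimal sketch together with its inverse:

```python
import math

# The logit (inverse sigmoid): logit(p) = ln(p / (1 - p)),
# defined for probabilities p in (0, 1).
def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):  # the inverse of logit
    return 1.0 / (1.0 + math.exp(-x))
```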
- Logit-normal distribution generally assumes that the logit of the data is normally distributed.
- the parameters can be ⁇ and ⁇ of the logits, as shown in Equation (4). It may generally be used for modeling variables which are proportions bounded in [0, 1], where 0 and 1 may never occur.
- recipe score function 616 may be used to determine L1 distance in nutritional/chemical/physical space.
- the recipe score function may be based on a difference between the initial formula 104 and the recommended formula 506.
- the score can be determined in any suitable manner within the scope of the disclosure.
- Attribute predictor 612 may be used to determine new color attributes 508 corresponding to the recommended formula 506 .
- the attribute predictor 612 may utilize the color predictor 102 or functionality of the color predictor 102 to determine the new color attributes 508 based on the attribute distance (e.g., how close is the initial formula 104 to the target color attributes 504 ).
- the new color attributes 508 may be similar to the color attributes 106 .
- the new color attributes 508 may belong to the same or different color space than the target color attributes.
- the recommended formula 506 may include 24.5% water, 30.7% pea protein, and 10.2% almond oil, and the new color attributes 508 may include {R: 112, G: 15, B: 210}.
- the new color attributes 508 may correspond to the recommended formula 506 , which may provide optimal modification to the initial formula 104 .
- Attribute distance loss function 614 may be used to determine how close the formula is to the target color attributes 504 based on the attribute predictions made by the attribute predictor 612 .
- mean squared error (MSE) in the LAB space can be taken into account. This may be extensible to other attributes (e.g., smell, taste) so long as there is a differentiable model to predict them given a recipe.
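- A sketch of MSE in the LAB space; the predicted and target triples are illustrative:

```python
# Sketch of the attribute distance as mean squared error (MSE) over the
# three L*, a*, b* components.
def lab_mse(predicted, target):
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(target)

attr_loss = lab_mse((52.0, 10.0, -4.0), (50.0, 12.0, -4.0))
```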
- FIG. 7 illustrates an embodiment of a computer-implemented method 700 to determine the color of a food item, according to certain embodiments.
- the method 700 may be executed by the color predictor 102 in FIG. 1 or in FIG. 5 , and/or by any suitable combination of components described herein.
- a formula for a recipe of the food item may be received.
- the formula may comprise a list of ingredients and their respective quantities in the recipe.
- the food item may include plant-based ingredients, animal-based ingredients, synthetic ingredients, and/or other suitable types of ingredients.
- the formula may be the initial formula 104 received by the color predictor 102 .
- the formula may comprise ingredients a-m and their respective quantities 1-n.
- the ingredient a may be water and the quantity 1 may be 47.2%;
- the ingredient b may be pea protein and the quantity 2 may be 20.6%, and so on.
- the formula may be part of the raw recipe data 218 in FIG. 2 .
- the list of ingredients may be encoded for representation as an embedding vector.
- the list of ingredients may be encoded by the multi-warm encoder 204 or another suitable encoder.
- the list of ingredients may be parsed by the ingredient parser 202 before feeding to the multi-warm encoder 204 .
- color attributes associated with the food item can be predicted using a color predictor.
- the color attributes may belong to a color space.
- the color predictor 102 may be used to predict the color attributes 106 associated with the food item as discussed with reference to FIG. 2 .
- the color attributes 106 may belong to the CIELAB color space and may include floating point values.
- the color predictor may have been trained using supervised dataset and/or an unsupervised dataset.
- the raw recipe data 218 and the images extracted from the images database 220 may have been used to train the prediction model 216 in the color predictor 102 .
- the method 700 may also include encoding the list of ingredients to generate a multi-warm vector.
- the list of ingredients can be encoded by the multi-warm encoder 204 to generate a multi-warm vector comprising a respective dimension for each ingredient in the list of ingredients.
- the list of ingredients may be parsed by the ingredient parser 202 before feeding to the multi-warm encoder 204 .
- the method 700 may further include generating the ingredient embedding vector 404 using the ingredient embedder 206 given the multi-warm vector 300 .
- the method 700 may further include extracting color information associated with the food item for representing in the color space.
- the color information may be extracted using the image color extractor 208 and the corresponding colors may be encoded using the color encoder 212 for representing in the color space.
- the method 700 may further include training the color predictor given the ingredient embedding vector and the color information.
- the color predictor 102 may utilize CIEDE2000 delta E metric as a loss function for improving the prediction of the color.
- the color of the food item can be determined based on the color attributes.
- the prediction model 216 may determine the color of the food item based on the color space the color attributes 106 may belong to.
- the prediction model 216 may determine that the color of the food item is a particular color from a color palette or color scheme.
- FIG. 8 illustrates an embodiment of a computer-implemented method 800 to provide a recommendation to achieve target color attributes for a food item, according to certain embodiments.
- the method 800 may be executed by the color recommender 502 in FIG. 5.
- target color attributes are received to achieve a target color of a food item given its formula.
- the target color may correspond to the target color attributes.
- the target color attributes 504 may be received by the color recommender 502.
- the target color attributes 504 may be part of the raw parameters 602 in FIG. 6.
- the target color attributes may belong to a color space, e.g., CIELAB, RGB, inverted RGB, etc.
- at step 804, using a color recommender, changes to the formula can be determined to achieve the target color.
- the color recommender 502 may determine the recommended formula 506 as discussed with reference to FIG. 5 and FIG. 6 .
- the changes in the formula may have been determined based on the SGD and back propagation models as well as one or more loss functions.
- the loss functions may include the sparsity loss function 604 , the likelihood loss function 610 , the attribute distance loss function 614 , and the ingredient distance loss function 618 to take the ingredient sparsity, ingredient likelihood, attribute distance, and the ingredient distance respectively into consideration.
- at step 806, new color attributes corresponding to the recommended formula are provided.
- the color recommender 502 may provide the new color attributes 508 corresponding to the recommended formula 506 using the attribute predictor 612 .
- the attribute predictor 612 may be similar to the color predictor 102 or include the functionality of the color predictor 102 , and consider the attribute distance loss function 614 in determining the new color attributes 508 .
- the disclosed embodiments can utilize various machine learning algorithms and/or other suitable models to help minimize cooking time and resources by identifying, even before a food item is cooked or prepared, whether the recipe for the food item needs to be altered or whether the food item does not need to be cooked at all, based on the color prediction. Additionally, the color recommender can use the color predictor to provide a recommendation for changes in the formula for the recipe to improve the color of the food item or to achieve a target color.
- Certain embodiments can confer improvements in computer-related technology (e.g., artificial intelligence, machine learning, neural networks, etc.) by facilitating computer performance of functions not previously performable and/or not practically performed by the human mind.
- the technology can leverage one or more computational machine learning-associated training sets for training one or more artificial intelligence models (e.g., color predictor models; color recommender models; etc.) in a non-generic, application-specific fashion for predicting color of a food item and/or recommending changes to achieve a target color of a food item.
- Certain embodiments can confer improvements in functioning of computing systems themselves through components (e.g., components of embodiments of systems described herein, etc.) and/or processes (e.g., processes of embodiments of methods described herein, etc.) for improving computational accuracy in computationally predicting color of a food item and/or computationally recommending changes to achieve a target color of a food item.
- Embodiments of the systems and/or methods can include components for and/or processes including generating one or more food items including one or more colors (e.g., food items with the one or more colors) determined based on recommendations from one or more color recommender models and/or color prediction outputs from one or more predictor models.
- the method can include determining a set of recommended changes (e.g., to a food item formula) to achieve one or more colors for a food item; and generating the food items including the one or more colors (and/or including similar colors or derivable colors) based on the set of recommended changes (e.g., implementing one or more portions of the recommended changes; implementing modifications derived based on the recommended changes; etc.).
- Embodiments of the methods can include any suitable processes and/or functionality described in relation to the figures, system, and/or described herein.
- Models described herein can be run or updated: once; at a predetermined frequency; every time a certain process is performed; every time a trigger condition is satisfied; and/or at any other suitable time and frequency. Models can be run or updated concurrently with one or more other models, serially, at varying frequencies, and/or at any other suitable time. Each model can be validated, verified, reinforced, calibrated, or otherwise updated based on newly received, up-to-date data; historical data; and/or any other suitable data.
- Portions of embodiments of methods and/or systems described herein are preferably performed by a first party but can additionally or alternatively be performed by one or more third parties, users, and/or any suitable entities.
- data described herein can be associated with any suitable temporal indicators (e.g., seconds, minutes, hours, days, weeks, time periods, time points, timestamps, etc.) including one or more: temporal indicators indicating when the data was collected, determined (e.g., output by a model described herein), transmitted, received, and/or otherwise processed; temporal indicators providing context to content described by the data; changes in temporal indicators (e.g., data over time; change in data; data patterns; data trends; data extrapolation and/or other prediction; etc.); and/or any other suitable indicators related to time.
- parameters, metrics, inputs (e.g., formulas, ingredient attributes, other suitable features, etc.), outputs (e.g., color attributes, recommended formulas, etc.), and/or other suitable data can be associated with value types including any one or more of: scores, text values (e.g., ingredient descriptors, etc.), numerical values (e.g., color attributes, etc.), binary values, classifications, confidence levels, identifiers, values along a spectrum, and/or any other suitable types of values.
- Any suitable types of data described herein can be used as inputs (e.g., for different models described herein; for components of a system; etc.), generated as outputs (e.g., of models; of components of a system; etc.), and/or manipulated in any suitable manner for any suitable components.
- suitable portions of embodiments of methods and/or systems described herein can include, apply, employ, perform, use, be based on, and/or otherwise be associated with one or more processing operations including any one or more of: extracting features, performing pattern recognition on data, fusing data from multiple sources, combination of values (e.g., averaging values, etc.), compression, conversion (e.g., digital-to-analog conversion, analog-to-digital conversion), performing statistical estimation on data, and/or any other suitable processing operations.
- Embodiments of the system and/or portions of embodiments of the system can entirely or partially be executed by, hosted on, communicate with, and/or otherwise include one or more: remote computing systems (e.g., one or more servers, at least one networked computing system, stateless, stateful; etc.), local computing systems, mobile phone devices, other mobile devices, personal computing devices, tablets, databases, application programming interfaces (APIs) (e.g., for accessing data described herein, etc.) and/or any suitable components.
- Communication by and/or between any components of the system 100 and/or other suitable components can include wireless communication (e.g., WiFi, Bluetooth, radiofrequency, Zigbee, Z-wave, etc.), wired communication, and/or any other suitable types of communication.
- Components of embodiments of the system can be physically and/or logically integrated in any manner (e.g., with any suitable distributions of functionality across the components, such as in relation to portions of embodiments of methods described herein, etc.).
- Embodiments of the methods 700, 800 and/or the system 500 can include every combination and permutation of the various system components and the various method processes, including any variants (e.g., embodiments, variations, examples, specific examples, figures, etc.), where portions of the methods 700, 800 and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances, elements, components of, and/or other aspects of the system 100 and/or other entities described herein.
- any portion of the variants described herein can be additionally or alternatively combined, aggregated, excluded, used, performed serially, performed in parallel, and/or otherwise applied.
- the system 500 and/or methods 700 , 800 and/or variants thereof can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions.
- the instructions can be executed by computer-executable components that can be integrated with the system.
- the computer-readable instructions can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any other suitable device.
- the computer-executable component can be a general or application specific processor, but any suitable dedicated hardware or hardware/firmware combination device can alternatively or additionally execute the instructions.
Abstract
A color predictor is provided to predict the color of a food item given its formula, comprising its ingredients and their quantities. The color predictor may utilize machine learning algorithms and a set of recipe data for training. The color predictor can also be used by a color recommender to recommend changes to the given formula to achieve a target color.
Description
- This application is a continuation of U.S. non-provisional application Ser. No. 16/596,689, filed Oct. 8, 2019, the entire contents of which is hereby incorporated by reference for all purposes as if fully set forth herein. The applicant(s) hereby rescind any disclaimer of claim scope in the parent application(s) or the prosecution history thereof and advise the USPTO that the claims in this application may be broader than any claim in the parent application(s).
- The disclosure generally relates to food science and artificial intelligence, in particular, use of machine learning to predict color of a food item and/or recommend changes to achieve a target color.
-
FIG. 1 illustrates a system comprising a color predictor configured to predict the color of a food item given its formula in certain embodiments; -
FIG. 2 illustrates a block diagram of the color predictor in certain embodiments; -
FIG. 3 illustrates an example encoding vector generated using the multi-warm encoder for an example given formula according to certain embodiments; -
FIG. 4 illustrates an embodiment of a system which can utilize an ingredient embedder to incorporate unsupervised data for training an embedding layer, in certain embodiments; -
FIG. 5 illustrates an embodiment of a system comprising a color recommender configured to provide a recommendation for changes in the given formula for a food item to achieve the desired color of the food item; -
FIG. 6 illustrates a block diagram that can be used to describe the loss function model implemented by the color recommender, according to certain embodiments; -
FIG. 7 illustrates a computer-implemented method to determine color of a food item according to certain embodiments; and -
FIG. 8 illustrates a computer-implemented method to provide a recommendation to achieve target color attributes for a food item according to certain embodiments. - The following description of the embodiments is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use this invention.
- In some instances, the color of a food item prepared using a set of ingredients may not be as expected, or a chef or another entity may desire to improve or change the color of the food item without starting from scratch. In other instances, it may be desirable to predict the color of the food item before it is prepared or cooked, so that the recipe can be altered or not cooked at all. Certain embodiments can predict the color of a food item given a formula for its recipe by utilizing data science, color science, food science and machine learning algorithms. The formula may include a list of ingredients and their respective quantities. Certain embodiments can also recommend changes in the given formula to achieve a desired color with minimum modifications to the recipe.
- The food item can include plant-based ingredients, animal-based ingredients, synthetic ingredients, and/or a combination thereof. In some examples, the food item can be a plant-based food item that can mimic animal-based foods from the sensory (e.g., flavor and/or texture) and/or visual perspectives. A number of products (e.g., plant-based ingredients, etc.) are available in the market that can provide substitutes for animal-based food (e.g., animal-based food such as chicken, meat patties, milk, etc.). For example, a combination of plant-based ingredients can be cooked using a certain recipe to taste, look and/or feel like sausage.
- Certain embodiments may approach the color prediction of the food item as a regression problem. A color predictor can use machine learning algorithms to predict the color in a certain color space given a list of strings representing the ingredients of the food item. In some instances, the prediction can be used to determine, even before the food item is cooked or prepared, whether the recipe for the food item needs to be altered or is not needed at all. The color predictor can additionally or alternatively be used as a component in a color recommender to provide a recommendation for changes in the formula for the recipe to improve the color of the food item and/or to achieve a target color. The color recommender may additionally or alternatively implement a loss function to optimize the changes in the given formula by considering attribute distance, ingredient distance, ingredient likelihood, ingredient sparsity, and/or other factors. In a specific example, the color recommender can be based on all of attribute distance, ingredient distance, ingredient likelihood, and ingredient sparsity.
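As an illustrative sketch only, a weighted combination of loss terms such as those listed above might look like the following. The weights, the CIE76 Euclidean color distance (a simpler stand-in for CIEDE2000), the absolute-difference ingredient distance, and the new-ingredient sparsity count are all assumptions of this sketch, and the ingredient-likelihood term is omitted for brevity; none of this is the disclosed implementation.

```python
import math

def recommender_loss(new_formula, initial_formula, predicted_lab, target_lab,
                     w_color=1.0, w_ingredient=0.5, w_sparsity=0.1):
    """Illustrative combined loss over attribute (color) distance, ingredient
    distance from the initial formula, and a sparsity penalty.
    Weights and distance choices are assumptions for this sketch."""
    # Attribute distance: Euclidean distance in CIELAB (CIE76), a simple
    # stand-in for the CIEDE2000 metric used elsewhere in the disclosure.
    color_dist = math.dist(predicted_lab, target_lab)
    # Ingredient distance: total absolute change in proportions.
    names = set(new_formula) | set(initial_formula)
    ingredient_dist = sum(abs(new_formula.get(n, 0.0) - initial_formula.get(n, 0.0))
                          for n in names)
    # Sparsity: penalize introducing many ingredients absent from the original.
    sparsity = sum(1 for n in new_formula if n not in initial_formula)
    return w_color * color_dist + w_ingredient * ingredient_dist + w_sparsity * sparsity

# Hypothetical recommendation: add a little beet powder, reduce water.
loss = recommender_loss({"water": 45.0, "beet powder": 2.2, "pea protein": 20.6},
                        {"water": 47.2, "pea protein": 20.6},
                        predicted_lab=(52.0, 19.0, 36.0),
                        target_lab=(50.0, 25.0, 30.0))
```

Minimizing such a loss trades off color accuracy against the size of the change to the recipe, matching the "minimum modifications" goal stated above.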
-
FIG. 1 illustrates a system 100 comprising a color predictor 102 configured to predict the color of a food item given its formula (e.g., based on its formula, etc.). The color predictor 102 may receive an initial formula 104 of a recipe for the food item and provide color attributes 106 of the predicted color. The color attributes 106 may represent the color in a color space with floating-point values. For example, the color predictor 102 may determine that the color of the food item is a particular color (e.g., brown, white, yellow, etc.) based on a color palette or a color scheme. - The
initial formula 104 of the recipe for the food item preferably includes two or more ingredients and their respective quantities. Additionally or alternatively, the food item may include any suitable number of ingredients and/or their respective quantities. The recipe may include plant-based ingredients, animal-based ingredients, water-based ingredients, synthetic ingredients, and/or a combination thereof. Some non-limiting examples of the plant-based ingredients may include vegetables (e.g., onions, potatoes, peas, garlic, spinach, carrots, celery, squash, etc.), fruit (e.g., apples, pears, grapes, etc.), herbs (e.g., oregano, cilantro, basil, etc.), spices (black peppers, turmeric, red chili peppers, cinnamon, etc.), oils (e.g., corn oil, olive oil, almond oil), nuts (e.g., almonds, walnuts, pistachios, etc.), legumes (e.g., lentils, dried peas, soybeans, pulses, etc.), starch, proteins, fibers, carbohydrates, sugars, and/or other suitable plant-based ingredients, etc. Some non-limiting examples of the animal-based ingredients may include dairy products (e.g., milk, butter, cheese, yogurt, ice cream, etc.), egg-based products (e.g., mayonnaise, salad dressings, etc.), meat products (e.g., burger patties, sausages, hot dogs, bacon, etc.), seafood (e.g., fish, crab, prawns, etc.), and/or other suitable animal-based ingredients, etc. Synthetic ingredients may include artificially produced food, e.g., artificial meats, artificial sweeteners, artificial milk, and/or other suitable synthetic ingredients, etc. - As shown in
FIG. 1, the initial formula 104 may include a quantity 1 (e.g., a first quantity, etc.) for an ingredient a (e.g., a first ingredient, etc.), a quantity 2 (e.g., a second quantity, etc.) for an ingredient b (e.g., a second ingredient, etc.), and a quantity n (e.g., an nth quantity, etc.) for an ingredient m (e.g., an mth ingredient, etc.). The quantities may be represented using percentages, fractions, units, and/or other suitable value types and/or methods. In one example, the initial formula 104 may include 47.2% water, 20.6% pea protein, and 15.7% coconut oil; hence the ingredient a (e.g., first ingredient, etc.) is "water", ingredient b (e.g., second ingredient, etc.) is "pea protein", ingredient m (e.g., mth ingredient, etc.) is "coconut oil", quantity 1 (e.g., first quantity, etc.) is "47.2", quantity 2 (e.g., second quantity, etc.) is "20.6" and quantity n (e.g., nth quantity, etc.) is "15.7". - The
color predictor 102 may be configured to perform the color prediction using a prediction model based on neural networks, regression models, classification models, and/or other suitable machine learning algorithms. - In certain embodiments, color predictor(s) 102 and/or other suitable models, suitable components of embodiments, and/or suitable portions of embodiments of methods described herein can include, apply, employ, perform, use, be based on, and/or otherwise be associated with artificial intelligence approaches (e.g., machine learning approaches, etc.) including any one or more of: supervised learning (e.g., using gradient boosting trees, using logistic regression, using back propagation neural networks, using random forests, decision trees, etc.), unsupervised learning (e.g., using an Apriori algorithm, using K-means clustering), semi-supervised learning, a deep learning algorithm (e.g., neural networks, a restricted Boltzmann machine, a deep belief network method, a convolutional neural network method, a recurrent neural network method, stacked auto-encoder method, etc.), reinforcement learning (e.g., using a Q-learning algorithm, using temporal difference learning), a regression algorithm (e.g., ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, locally estimated scatterplot smoothing, etc.), an instance-based method (e.g., k-nearest neighbor, learning vector quantization, self-organizing map, etc.), a regularization method (e.g., ridge regression, least absolute shrinkage and selection operator, elastic net, etc.), a decision tree learning method (e.g., classification and regression tree, iterative dichotomiser 3, C4.5, chi-squared automatic interaction detection, decision stump, random forest, multivariate adaptive regression splines, gradient boosting machines, etc.), a Bayesian method (e.g., Bayesian Linear Regression, Bayesian Neural Networks, Bayesian Logistic Regression, averaged 
one-dependence estimators, Bayesian belief network, etc.), a kernel method (e.g., a support vector machine, a radial basis function, a linear discriminant analysis, Gaussian processes, etc.), a clustering method (e.g., k-means clustering, expectation maximization, etc.), an associated rule learning algorithm (e.g., an Apriori algorithm, an Eclat algorithm, etc.), an artificial neural network model (e.g., a Perceptron method, a back-propagation method, a Hopfield network method, a self-organizing map method, a learning vector quantization method, etc.), a dimensionality reduction method (e.g., principal component analysis, partial least squares regression, Sammon mapping, multidimensional scaling, projection pursuit, etc.), an ensemble method (e.g., boosting, bootstrapped aggregation, AdaBoost, stacked generalization, gradient boosting machine method, random forest method, etc.), suitable classifiers (e.g., naïve Bayes, etc.), and/or any suitable artificial intelligence approach.
- In an embodiment, the ingredients a-m (and/or any suitable number and/or type of ingredients, etc.) may be represented as a list of strings which can be converted to a multi-dimensional embedding vector to represent the ingredients in a meaningful space. As an example, “word2vec”, “Glove”, and/or another suitable ingredients embedding model may be used to produce the embedding vector. Additionally, a vectorized version of a loss function (e.g., CIEDE2000) may be implemented that can be interpreted and differentiated by the prediction model. The ingredients embedding model can be trained by utilizing supervised data, unsupervised data and/or a combination thereof. In some instances, the ingredients embedding model may additionally or alternatively be used to predict the probability of each ingredient to be in a recipe. Alternatively, the ingredients can be represented in any suitable form.
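The probability-of-inclusion idea mentioned above can be illustrated with a softmax over per-ingredient scores. The scores and ingredient names below are hypothetical; a trained embedding model would produce the scores.

```python
import math

def softmax(scores):
    """Numerically stable softmax: shift by the max before exponentiating
    so that the returned probabilities sum to one."""
    m = max(scores.values())
    exps = {name: math.exp(s - m) for name, s in scores.items()}
    total = sum(exps.values())
    return {name: e / total for name, e in exps.items()}

# Hypothetical co-occurrence scores for three candidate ingredients.
probs = softmax({"milk": -1.2, "butter": 2.0, "olive oil": 0.7})
```

Each value in `probs` is the modeled likelihood of that ingredient also being in the recipe, and the values sum to one.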
- The color attributes 106 produced by the color predictor 102 may be represented as a vector belonging to a certain color space. Alternatively, the color attributes 106 can be represented as a non-vector and/or in any suitable representation. In examples, the color space may correspond to CIELAB, RGB (red, green, blue), inverted RGB, HSV (hue, saturation, value), CMYK, and/or another suitable color space. In certain implementations, the color predictor 102 may use the images associated with the food item and identify a most prominent or strongest color in the food item for representing in the color attributes 106 belonging to the specific color space. In certain embodiments, the color predictor 102 is further explained with reference to FIG. 2. -
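Determining that a predicted color vector corresponds to a particular named color from a palette can be sketched as a nearest-neighbor lookup. The palette names and RGB values below are illustrative assumptions, not values from the disclosure.

```python
import math

# Hypothetical palette of named reference colors in RGB.
PALETTE = {
    "brown": (150, 75, 0),
    "white": (255, 255, 255),
    "yellow": (255, 255, 0),
}

def nearest_palette_color(rgb):
    """Return the palette name whose RGB value is closest (Euclidean distance)."""
    return min(PALETTE, key=lambda name: math.dist(PALETTE[name], rgb))

label = nearest_palette_color((160, 82, 12))  # a brownish predicted color
```

The same lookup works in any color space; in CIELAB the Euclidean distance corresponds more closely to perceived difference than it does in RGB.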
FIG. 2 illustrates a block diagram of the color predictor 102 in certain embodiments. - The
color predictor 102 may include an ingredient parser 202, a multi-warm encoder 204, an ingredients embedder 206, an image color extractor 208, a correction process 210, a color encoder 212, a loss function model 214, and a prediction model 216. Additionally or alternatively, the color predictor 102 can include any suitable combination of the above. -
Raw recipe data 218 may be part of a supervised dataset, and/or an unsupervised dataset. The supervised dataset may include input data and output data that may be labeled to be used as a training dataset (e.g., a first training dataset) for future data processing. For example, the supervised dataset may include data for a recipe for a food item as an input and a color attribute of the food item as the output. The unsupervised dataset may include only input data without any corresponding output data. However, any suitable data types can be used for the training dataset. - The
raw recipe data 218 may store data associated with multiple recipes of various food items (e.g., one or more recipes of one or more food items; in a 1:1 recipe-to-food-item relationship and/or any suitable numerical relationship between recipes and food items, etc.). In certain examples, the raw recipe data 218 may store ingredients as a list of tuples of the ingredient name (e.g., string) and its proportion (e.g., percentage or quota). As an example, the ingredients may include [{"name": "water", "quota": 47.197640117994105}, {"name": "pea protein (textured)", "quota": 20.64896755162242}, {"name": "coconut oil (without flavor)", "quota": 15.732546705998034}, . . . ]. The raw recipe data 218 may also store the recipe as a list of step strings, e.g., [{"step": 1, "description": "Weigh 20.65 g of pea protein (textured), 1.57 g of salt and 0.10 g of beetroot powder in a bowl"}, . . . ]. The raw recipe data 218 may also store observed colors as a list of strings and one or more images of each food item. The raw recipe data 218 may utilize JSON format and/or another suitable format to store the information. The raw recipe data 218 may be stored in memory, e.g., RAM, EEPROM, flash memory, hard disk drives, optical disc drives, solid state memory, and/or any type of memory suitable for data storage. - The
ingredient parser 202 may be configured to parse the raw recipe data 218. In certain examples, the ingredient parser 202 may process the ingredient strings in the raw recipe data 218 to remove any duplicate ingredients, remove parentheticals and post-comma descriptors, collapse double spaces, remove quantities using a list of quantity words, filter overly-long ingredients and match to clean reference ingredients when possible, and/or perform any other text processing and/or other suitable processing. As an example, "organic wild berries (organic blackberries, organic strawberries, organic bilberries, organic raspberries)" may be processed to "berries." Similarly, "½ (12 ounce) box vanilla wafer cookies (such as Nilla®), crushed very fine" may be processed to "vanilla wafer cookies." Processing can be automatic, computational, manual, and/or other suitable types of processing. - The
multi-warm encoder 204 may be configured to represent the formula as a sparse proportional vector. In specific examples, the multi-warm encoder 204 functions to generate (e.g., output, etc.) one or more multi-warm vectors 300, such as based on one or more formulas 302 and/or other suitable inputs. Each dimension of the sparse proportional vector may correspond to a unique ingredient, and the value in each dimension can be the proportion of that ingredient present in the food item. In a specific example, the proportions are scaled to sum to 1 instead of 100, but any suitable type of scaling can be employed. In a specific example, the multi-warm encoded vector may include entries that are "warm" (e.g., floating point) instead of "hot" (e.g., one) with most dimensions being zeros and few dimensions being non-zero. However, the formula can be represented in any suitable manner. -
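The sparse proportional encoding described above can be sketched as follows. The small vocabulary and its ordering are assumptions for illustration; a real encoder would assign a fixed dimension to every known ingredient, and percentages are divided by 100 so a full formula's entries sum toward 1.

```python
# Illustrative ingredient vocabulary (one fixed dimension per ingredient).
VOCAB = ["salt", "water", "sugar", "pea protein", "beetroot powder", "coconut oil"]

def multi_warm(formula):
    """Encode {ingredient: percent} as a sparse proportional ("multi-warm")
    vector: zeros everywhere except the formula's ingredients, whose entries
    hold the proportion rescaled from a percentage to the 0-1 range."""
    vec = [0.0] * len(VOCAB)
    for name, percent in formula.items():
        vec[VOCAB.index(name)] = percent / 100.0
    return vec

vec = multi_warm({"water": 47.2, "pea protein": 20.6, "coconut oil": 15.7})
# vec ≈ [0, 0.472, 0, 0.206, 0, 0.157], mirroring the FIG. 3 example
```

The entries are "warm" floating-point proportions rather than the 0/1 entries of a one-hot encoding, and most dimensions stay zero.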
FIG. 3 shows an example multi-warm vector 300 generated using the multi-warm encoder 204 for a given formula 302. The formula 302 may be similar to the initial formula 104 in FIG. 1. As an example, the formula 302 may include "water: 47.2%", "pea protein: 20.6%", and "coconut oil: 15.7%" among other ingredients. The multi-warm encoder 204 may encode the formula 302 to produce a sparse proportional vector comprising [0, 0, 0, . . . , 0, 0.472, 0, . . . , 0, 0.206, 0, . . . , 0, 0.157, 0, . . . ] as represented by the multi-warm vector 300. The multi-warm encoding may be used to perform a weighted sum of embeddings. Since embeddings are generally trained without quantities, the multi-warm values can be scaled using each ingredient's distribution. - Referring back to
FIG. 2, the ingredients embedder 206 may be configured to convert the ingredient strings into multi-dimensional embedding vectors that may contain pertinent information about the ingredients. The ingredient embeddings may be generated by training an embedding layer using a model similar to an autoencoder for unsupervised feature learning. In a specific example, the autoencoder is a type of artificial neural network which can be used for dimensionality reduction by copying the most relevant aspects of its input to its output. Given a set of ingredient strings, for each ingredient, the embedding layer may provide a likelihood of also being in a recipe. In some examples, the ingredients embedder 206 may be used to identify ingredients from an unsupervised dataset that may be similar to a first set of ingredients used in the supervised dataset, e.g., represented by the raw recipe data 218 in FIG. 2. Thus, use of the ingredients embedder 206 may be beneficial to incorporate unsupervised data and domain knowledge to supplement a limited amount of supervised data. In certain embodiments, this is further explained with reference to FIG. 4. -
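The proportion-weighted sum of ingredient embeddings, computed as a dot product between the multi-warm vector and an embedding matrix, can be sketched with toy sizes and random weights (all illustrative, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 6, 4                  # toy sizes, illustrative only
E = rng.normal(size=(vocab_size, embed_dim))  # one embedding row per ingredient

# Multi-warm formula vector: mostly zeros, scaled proportions elsewhere.
w = np.zeros(vocab_size)
w[[1, 3, 5]] = [0.472, 0.206, 0.157]

# Embedded formula: weighted sum of the ingredients' embedding rows,
# computed in one shot as a dot product with the embedding matrix.
embedded_formula = w @ E                      # shape (embed_dim,)
```

The result is a dense, low-dimensional vector (possibly with negative values), matching the description of the embedded formula vector below.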
FIG. 4 illustrates a system 400 which can utilize the ingredient embedder 206 to incorporate unsupervised data (e.g., as a second training dataset, etc.) for training an embedding layer, in certain embodiments. - As shown in
FIG. 4, ingredients 402 may include a second set of ingredients belonging to an unsupervised dataset. The ingredients embedder 206 may be used to convert the ingredient strings from the ingredients 402 into multi-dimensional embedding vectors 404 to represent the ingredients in a meaningful space. In some implementations, functionality of the ingredients embedder 206 may be adapted from word2vec (e.g., skip-gram, continuous bag-of-words (CBOW)) and/or a similar model, but any suitable approaches can additionally or alternatively be used. For unsupervised feature learning, a respective embedding vector for each ingredient in the formula may be fetched from the ingredients embedder 206 to provide the embedded vectors 404. An adder 406 may be used to generate an embedded formula 408 by performing the weighted sum of the embedded vectors 404 using their proportions in the formula. Since the embeddings can be trained without quantities in specific examples, the multi-warm vector values can be scaled using each ingredient's distribution. In some implementations, this may be performed using a dot product with an embedding matrix. The embedded formula vector 408 may have a different (e.g., much smaller) dimension, may be dense, and/or may contain negative values. - The embedding
layer 410 may be implemented using a linear neural network layer and/or another suitable model (e.g., a different type of neural network, any suitable models described herein, etc.). The embedding layer 410 may be trained to incorporate unsupervised data and/or domain knowledge (e.g., a second training set) to supplement the limited amount of supervised data (e.g., a first training set) represented in the raw recipe data 218. The unsupervised data may belong to a large set of recipes obtained from the web, generated automatically or manually, and/or otherwise procured. In specific examples, leveraging the embedding layer 410 (e.g., to incorporate unsupervised data and/or domain knowledge to supplement supervised data and/or other suitable data, etc.) can confer improvements in computer function through facilitating improvements in accuracy of predicting the color of a food item and/or recommending changes to achieve a target color for a food item. - The
activation function 412 may be used to generate a vector 414 with a probability distribution of the ingredients, which may represent the likelihood of also being in a given recipe. In some implementations, "Softmax" or a similar activation function may be used to output the vector 414. The probability predictions in the vector 414 can add up to one. As an example, the vector 414 may include "milk: 0.03", "butter: 0.76" and "olive oil: 0.21." In specific examples, the probability predictions generated by the system 400 may be useful to utilize generally large amounts of unsupervised data for supplementing the relatively smaller amount of supervised data for predicting the color(s) of the given food item(s). - Referring back to
FIG. 2, the images database 220 may include images in the form of pixel values (and/or other suitable form) for each food item associated with the raw recipe data 218. In some implementations, the images database 220 and the raw recipe data 218 may be part of the same dataset. In certain embodiments, instead of relying on observed color information of the ingredients or the food item provided in the raw recipe data 218, the color of each food item can be extracted from the corresponding images. For example, in certain instances, a label for the color may be too broad (e.g., brown), multiple colors may be observed for a food item, and/or the label may include a non-color word. Thus, in certain implementations, the image color extractor 208 in FIG. 2 may be configured to extract the color from the image of each food item stored in the images database 220. - The
image color extractor 208 may utilize any suitable algorithm to extract the colors from the image(s) of each food item. In some implementations, given an image, the image color extractor 208 may flatten its pixels into a list of RGB values. In some implementations, to determine an RGB value for a food item, the image color extractor 208 may apply k-means clustering to the pixel values of the food item's image to get two clusters and then discard the cluster whose center is closest to the background. For example, the image color extractor 208 may cluster the color vectors using k-means with k=2 into food color and background color. In specific examples, the images have a uniform white or black background; therefore, in certain instances, the four corners of the image may be used to extract the most dominant color. Note that the RGB color space is discussed for color representation with reference to the image color extractor 208; however, it will be understood that the colors may be represented using any suitable color space without deviating from the scope of the disclosure. Additionally or alternatively, the image color extractor 208 can leverage any suitable approaches to extract colors from the image(s) of each food item. - In certain instances, some of the food items may have multiple separate colors (e.g., a pie's crust and filling, an empanada's outside color and inside color, a cake and its toppings, and/or food items with irregularly spread out ingredients). In such cases and/or in any suitable scenarios, the preparation process in addition to the raw ingredients (and/or in addition to any suitable types of data described herein, etc.) may be used to accurately predict the color of the image. For example, in some instances, the preparation process can affect the coloration of homogeneous foods through processes such as baking.
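The k=2 clustering step can be sketched as below. This is a toy k-means with a deterministic brightness-based initialization and an assumed-known background color, run on a synthetic image; it is a sketch of the idea, not the disclosed pipeline.

```python
import numpy as np

def dominant_food_color(pixels, background, iters=10):
    """Toy k=2 k-means over RGB pixels: initialize one center at the darkest
    and one at the brightest pixel, iterate, then discard the cluster whose
    center lies closest to the (assumed-known) background color."""
    pixels = pixels.astype(float)
    brightness = pixels.sum(axis=1)
    centers = np.stack([pixels[brightness.argmin()], pixels[brightness.argmax()]])
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = pixels[labels == k].mean(axis=0)
    # Keep the center farther from the background: that is the food color.
    return centers[np.linalg.norm(centers - background, axis=1).argmax()]

# Synthetic "image": white background pixels plus noisy brownish food pixels.
rng = np.random.default_rng(0)
bg = np.tile([255.0, 255.0, 255.0], (200, 1))
food = np.tile([150.0, 90.0, 40.0], (100, 1)) + rng.integers(-5, 6, (100, 3))
color = dominant_food_color(np.vstack([bg, food]), np.array([255.0, 255.0, 255.0]))
```

With well-separated food and background clusters, the surviving cluster center recovers the dominant food color despite the per-pixel noise.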
- In certain embodiments, the
correction process 210 may be used to correct any errors in the color extraction process. The images may be inspected manually and/or by automated means for correction. In instances where there are multiple images for a food item, the most relevant image may be used. In some instances, raw recipe data for food items without any images, or with multiple images with highly varying colors, may be discarded. Any suitable image processing techniques can be applied for processing the images into a form suitable for use by the image color extractor 208. Image processing techniques (e.g., for use in relation to image color extraction, for use in relation to any suitable components of embodiments of the systems and/or any suitable processes of embodiments of the methods, etc.) can include any one or more of: image filtering, image transformations, histograms, structural analysis, shape analysis, object tracking, motion analysis, feature detection, object detection, stitching, thresholding, image adjustments, mirroring, rotating, smoothing, contrast reduction, and/or any other suitable image processing techniques. - The
color encoder 212 may be configured to encode the colors of the food item for representing in a color space. For example, the colors to be encoded may have been extracted using the image color extractor 208 or provided in the raw recipe data 218 for the respective food item. The encoding may be based on RGB space, inverted RGB space, HSV space, CMYK space, CIE L*a*b* (or CIELAB or Lab) and/or another suitable space. In some instances, representation may be more or less accurate based on the regression model (and/or other suitable model) used for learning, lighting of the images, and/or other factors. In certain examples, CIELAB may provide better color representation as it is designed around human perception of color rather than computer display of the image. The CIELAB color space is a standard defined by the International Commission on Illumination (CIE). It can express color as three values: L* for the lightness from black (0) to white (100), a* from green (−) to red (+), and b* from blue (−) to yellow (+). In some instances, the CIELAB color space may provide better results for implementing a loss function (e.g., a delta E metric) for the color difference. However, any suitable color space can be used. - The
loss function 214 may be configured to improve the prediction of visually perceived color differences through the introduction of various corrections to the color represented in a particular color space. In certain embodiments, the loss function 214 may implement the CIEDE2000 delta E metric to describe the distance between two colors represented in the CIELAB color space. For example, the CIELAB color space may be designed so that its Euclidean distance is delta E. The CIEDE2000 delta E metric may be designed so that its unit distance is the "just noticeable difference" between the two colors. The "just noticeable difference" of the delta E can usually be 1, e.g., if two colors have a delta E less than 1 the difference is unperceivable, and if larger than 1 it is perceivable. Although, in specific examples, the "just noticeable difference" is an indirect proxy, it can be the most suitable and objective metric for human perception, and its constants can be tuned to better measure the desired color attributes. Furthermore, in specific examples, CIEDE2000 can be designed to deconstruct the color into Lightness, Chroma and Hue (LCH), and to compensate for nonlinearities in LCH and colors near blue. CIEDE2000 can also include weighting factors for L, C, H dependent on the application (e.g., cars, textiles, computer graphics, etc.). - The
prediction model 216 may be configured to take into account feedback from the loss function 214 and predict the color attributes 106 of the given food item. In certain embodiments, the color attributes 106 may be represented in the CIELAB space, e.g., L*a*b*. The CIELAB space can be represented in a three-dimensional space using a three-dimensional model. The three coordinates of CIELAB may represent the lightness of the color, its position between red/magenta and green, and its position between yellow and blue. Generally, the same amount of numerical change in the L*a*b* values corresponds to roughly the same amount of visually perceived change. It will be noted that the color attributes 106 may be represented in any suitable color space (e.g., RGB, inverted RGB, HSV, etc.) without deviating from the scope of the disclosure. The prediction model 216 may further determine that the color of the food item is a particular color from a color palette or color scheme. - The
prediction model 216 may utilize a regression model, e.g., a type of neural network and/or ordinary least squares (OLS) and/or other suitable models, based on a number of factors. In examples, the effect of ingredients on color is generally not linear, e.g., cooking can change the color of an ingredient. Furthermore, in non-uniform recipes, certain ingredients can dominate the color. Thus, in specific examples, a feedforward neural network may perform better than the OLS model in certain embodiments. As an example, a configuration of the neural network may include multiple dense layers with leaky ReLU activation and a final output layer with linear activation, with embeddings for the input. Both the input data and the output data may be scaled for mean 0 and variance 1. - In certain embodiments, prediction models 216 and/or other suitable models, suitable components of embodiments, and/or suitable portions of embodiments of methods described herein can include, apply, employ, perform, use, be based on, and/or otherwise be associated with artificial intelligence approaches (e.g., machine learning approaches, etc.)
including any one or more of: supervised learning (e.g., using gradient boosting trees, using logistic regression, using back propagation neural networks, using random forests, decision trees, etc.), unsupervised learning (e.g., using an Apriori algorithm, using K-means clustering), semi-supervised learning, a deep learning algorithm (e.g., neural networks, a restricted Boltzmann machine, a deep belief network method, a convolutional neural network method, a recurrent neural network method, stacked auto-encoder method, etc.), reinforcement learning (e.g., using a Q-learning algorithm, using temporal difference learning), a regression algorithm (e.g., ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines, locally estimated scatterplot smoothing, etc.), an instance-based method (e.g., k-nearest neighbor, learning vector quantization, self-organizing map, etc.), a regularization method (e.g., ridge regression, least absolute shrinkage and selection operator, elastic net, etc.), a decision tree learning method (e.g., classification and regression tree, iterative dichotomiser 3, C4.5, chi-squared automatic interaction detection, decision stump, random forest, multivariate adaptive regression splines, gradient boosting machines, etc.), a Bayesian method (e.g., naïve Bayes, averaged one-dependence estimators, Bayesian belief network, etc.), a kernel method (e.g., a support vector machine, a radial basis function, a linear discriminant analysis, etc.), a clustering method (e.g., k-means clustering, expectation maximization, etc.), an association rule learning algorithm (e.g., an Apriori algorithm, an Eclat algorithm, etc.), an artificial neural network model (e.g., a Perceptron method, a back-propagation method, a Hopfield network method, a self-organizing map method, a learning vector quantization method, etc.), a dimensionality reduction method (e.g., principal component analysis, partial least squares regression,
Sammon mapping, multidimensional scaling, projection pursuit, etc.), an ensemble method (e.g., boosting, bootstrapped aggregation, AdaBoost, stacked generalization, gradient boosting machine method, random forest method, etc.), and/or any suitable artificial intelligence approach.
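The dense-layer configuration described above (multiple dense layers with leaky ReLU activation and a linear output layer, with standardized inputs and outputs) can be sketched as follows. The layer sizes, the random weights, and the simple Euclidean (CIE76-style) delta E loss are illustrative assumptions only; the CIEDE2000 metric discussed earlier adds lightness, chroma, and hue corrections on top of this basic distance.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def forward(x, layers):
    """Dense layers with leaky ReLU activation and a linear output layer."""
    *hidden, (W_out, b_out) = layers
    for W, b in hidden:
        x = leaky_relu(x @ W + b)
    return x @ W_out + b_out  # linear activation for L*a*b* regression

def delta_e_loss(pred_lab, true_lab):
    """Mean Euclidean distance in CIELAB (CIE76 delta E); CIEDE2000 would
    add corrections for nonlinearities in lightness, chroma, and hue."""
    return np.mean(np.linalg.norm(pred_lab - true_lab, axis=-1))

# Hypothetical sizes: 10-dim ingredient embedding in, 3-dim L*a*b* out.
sizes = [10, 32, 32, 3]
layers = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

x = rng.normal(size=(4, 10))   # inputs scaled to mean 0, variance 1
pred = forward(x, layers)      # predicted (standardized) L*a*b* values
loss = delta_e_loss(pred, np.zeros((4, 3)))
```

In this sketch a training step would backpropagate `loss` through the dense layers; the standardization of inputs and outputs matches the mean-0, variance-1 scaling noted above.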
- In specific examples, the
color predictor 102 may utilize only the ingredients of the food item to predict the color, since the raw recipe data 218 may be limited. For example, the recipe information can be unstructured, and there can be insufficient data available without structure. Predicting the color of a food item may not be a deterministic problem due to various factors: multiple dishes with different colors can be made from the same ingredients, many food items may have more than one color (e.g., cakes, pies, soups), the training dataset may be limited in size, and/or other suitable factors. Thus, in such cases and/or in suitable scenarios, predicting a primary color of the food item with the recipe information may provide better results. In certain embodiments, the color predictor 102 may additionally or alternatively be used as a component in a system that may utilize a color recommender to provide a recommendation for changes in the formula for the recipe to improve its color to match the desired color. In certain embodiments, this is further discussed with reference to FIG. 5. -
FIG. 5 illustrates an embodiment of a system 500 comprising a color recommender 502 for providing (e.g., configured to provide, etc.) one or more recommendations for changes in one or more initial formulas 104 to achieve the desired color of the food item given target color attributes 504. - In certain embodiments, the
color recommender 502 may be based on stochastic gradient descent (SGD) and back propagation models. The color recommender 502 may utilize the color predictor 102 to predict the color of a food item given the initial formula 104. For example, the color recommender 502 can determine one or more recommendations for one or more changes (e.g., modifications, etc.) to one or more initial formulas 104, based on output(s) of the color predictor 102. The color recommender 502 may additionally or alternatively implement a loss function 510 to optimize the initial formula 104 to achieve the desired color. The color recommender 502 may provide (e.g., output, determine, etc.) a recommended formula 506 and/or new color attributes 508 corresponding to the modification in the initial formula 104 to achieve the target color attributes 504. The target color attributes 504 and the new color attributes 508 may belong to the same color space as the color attributes 106 of FIG. 1, e.g., CIELAB, RGB, or another color space. Alternatively, the target color attributes 504, the new color attributes 508, and the color attributes 106 can belong to different color spaces, and/or any suitable combination of attributes can belong to any suitable combination of color spaces. As an example, the initial formula 104 may include 47.2% water, 20.6% pea protein, and 15.7% coconut oil; the target color attributes 504 may include {R: 102, G: 23, B: 214}. - The
loss function 510 may take into consideration multiple factors while determining the changes in the initial formula 104, e.g., attribute distance (e.g., how close the initial formula 104 is to the target color attributes 504), ingredient distance (e.g., how close the recommended formula 506 is to the initial formula 104, since it is desired to produce the same or similar type of food item for which the initial formula 104 was provided, and in specific examples to have minimal changes to the initial formula 104, thus making the cooking process similar and/or easier such as for the chefs, etc.), ingredient likelihood (e.g., how sensible the new formula is; for example, using 80% salt in the recipe may not be typical), and/or ingredient sparsity (e.g., how many ingredients are in the recommended formula 506; for example, it is not typical to have just a pinch of 300 ingredients). In certain embodiments, the loss function 510 is further explained with reference to FIG. 6. -
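A minimal sketch of how these four factors might be combined into one differentiable objective follows. The weights, the stand-in predictor, and the default zero log-likelihood are hypothetical assumptions, not values from the disclosure.

```python
import numpy as np

def recommender_loss(formula, initial, target_lab, predict_lab,
                     w_attr=1.0, w_dist=0.1, w_likely=0.05, w_sparse=0.01,
                     log_likelihood=lambda f: 0.0):
    """Weighted sum of the four factors: attribute distance, ingredient
    distance, ingredient likelihood, and ingredient sparsity. predict_lab
    stands in for the differentiable color predictor 102; weights are
    hypothetical."""
    attribute = np.sum((predict_lab(formula) - target_lab) ** 2)  # attribute distance
    distance = np.sum(np.abs(formula - initial))                  # ingredient distance
    likelihood = -log_likelihood(formula)                         # penalize implausible formulas
    sparsity = np.sum(np.abs(formula))                            # push small quantities to zero
    return (w_attr * attribute + w_dist * distance
            + w_likely * likelihood + w_sparse * sparsity)
```

With an identity predictor, a formula that already matches the target incurs only the sparsity term; moving any quantity away from the target increases the total loss, which is what SGD would descend.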
FIG. 6 illustrates a block diagram 600 that can be used to describe the loss function model implemented by the color recommender 502 in certain embodiments. The color recommender 502 may be designed to optimize inputs to the color predictor 102 using the loss function model described in FIG. 6. - The
color recommender 502 may operate on raw parameters 602 for a given formula of a recipe to implement the loss functions. As an example, the raw parameters 602 may include ingredients in the initial formula 104, e.g., ingredient a, ingredient b, . . . , ingredient m. In some embodiments, the raw parameters 602 may be optimized using a sparsity loss function 604. The raw parameters 602 may be further normalized using a normalization layer 606, such as before being operated on by other loss functions, e.g., likelihood loss function 610, attribute distance loss function 614, and/or ingredient distance loss function 618, etc., but the normalization layer 606 can be used at any suitable time. - The
sparsity loss function 604 may be implemented to bring the ingredients in the raw parameters 602 to zero unless they significantly improve the attribute loss. The attribute gradient may be balanced with the sparsity gradient. The slope near zero may be used to determine whether or not to sparsify an ingredient. The sparsity loss function 604 may also implement a step sparsity loss function, a magnitude sparsity loss function, and/or an L1 norm, among others, to minimize the color difference. However, the sparsity loss function 604 can implement any suitable combination of loss functions. - The
normalization layer 606 may be used to normalize the raw parameters 602 for other loss metrics and to avoid interfering with the sparsity loss function 604. In specific examples, after each training step, raw parameters may be projected according to non-negativity and multi-warm constraints, and constrained optimization may be performed. - The normalized ingredients may be encoded to generate a multi-warm vector using a
multi-warm encoder 608. The multi-warm encoder 608 may be similar to the multi-warm encoder 204, and the multi-warm vector may be similar to the multi-warm vector 300 shown in FIG. 3. The color recommender 502 may be designed to optimize the ingredients in the multi-warm vector, which can be fed to the color predictor 102. -
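One plausible reading of the multi-warm encoding, given the per-ingredient dimensions described for the multi-warm vector 300, is a multi-hot vector that stores each ingredient's quantity rather than 1.0. The vocabulary below is hypothetical.

```python
# Hypothetical ingredient vocabulary mapping names to vector dimensions.
VOCAB = {"water": 0, "pea protein": 1, "coconut oil": 2, "salt": 3}

def multi_warm_encode(formula, vocab=VOCAB):
    """Encode {ingredient: fraction} as a vector with one dimension per
    known ingredient ('multi-warm': like multi-hot, but each active
    dimension holds the quantity instead of 1.0)."""
    vec = [0.0] * len(vocab)
    for ingredient, fraction in formula.items():
        vec[vocab[ingredient]] = fraction
    return vec
```

The resulting vector is what the recommender would optimize and feed to the color predictor 102.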
Likelihood loss function 610 may be used to determine whether a given formula is a real or a typical formula. Likelihood may be defined as the probability of the given formula under the distribution of formulas, which may be proportional to the product of probability density function (PDF) values for all the ingredients. Likelihood loss function 610 may be operationalized as log-likelihood for numerical stability. Gradients for ingredients that are not in the recipe can be zeroed out below their minimum. The PDF can be infinitesimal but the actual likelihood may not be. - Likelihood for the original ingredient can be subtracted off for easier interpretation without any effect on the gradient, as shown in Equation (1) below.
-
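Since Equation (1) itself is not reproduced here, the following is only a hedged sketch of the idea it describes: sum per-ingredient log-PDF values and subtract the original formula's log-likelihood, so an unchanged recipe scores zero and the subtraction does not affect the gradient. The per-ingredient Gaussian is an illustrative assumption only.

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log of a normal PDF; stands in for whatever per-ingredient
    quantity distribution is fitted from recipe data."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def likelihood_loss(formula, original, params):
    """Negative log-likelihood of the candidate formula relative to the
    original, summed over ingredients. params maps each ingredient to a
    hypothetical (mu, sigma); subtracting the original's log-likelihood
    shifts the scale without changing the gradient."""
    score = 0.0
    for ingredient, quantity in formula.items():
        mu, sigma = params[ingredient]
        score += gaussian_logpdf(quantity, mu, sigma)
        score -= gaussian_logpdf(original[ingredient], mu, sigma)
    return -score
```

An unchanged formula yields a loss of exactly zero, and pushing a quantity away from its typical value increases the loss.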
- Optimizing the likelihood may not always be preferable since it can push towards an irrelevant mode. In certain instances, the likelihood can be clipped above a threshold while avoiding any modification of the good recipes. In specific examples, clipping applies to quantities that are less likely than those in the original recipe. In examples, further improvements are possible by using a quantile threshold for all ingredients, only modifying unlikely quantities similarly to the original, or implementing the inverse cumulative distribution function (CDF) instead of log-likelihood, among others.
- Ingredient
distance loss function 618 may be implemented to determine how close the recommended formula 506 is to the initial formula 104. In some examples, it may only include gradients for ingredients that were in the original recipe. In examples, since new ingredients may change the percentages of original ingredients, just the selected ingredients may be normalized. In examples, this may result in those ingredients being a smaller part of the overall recipe; however, it may be mitigated by the likelihood component. - In examples, since the ingredient data is not normally distributed and is not linear (e.g., the difference between 0.001 and 0.002 is much larger than between 0.501 and 0.502), a logit-normal distribution may be used to calculate ingredient distance in the logit space. In examples, the logit (inverse sigmoid) function is generally linear near 0.5, stretches small values, and takes inputs in [0, 1]. An example logit function is shown in Equation (3), where p is probability:
- logit(p) = ln(p/(1 − p))
- Logit-normal distribution generally assumes that the logit of the data is normally distributed. The parameters can be μ and σ of the logits, as shown in Equation (4). It may generally be used for modeling variables which are proportions bounded in [0, 1], where 0 and 1 may never occur.
- f(x; μ, σ) = 1/(σ√(2π)) · 1/(x(1 − x)) · exp(−(logit(x) − μ)²/(2σ²)), for 0 < x < 1
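A short sketch of measuring ingredient distance in logit space, as the passage above describes: the logit stretches differences between small quantities, so the gap between 0.001 and 0.002 counts for far more than the gap between 0.501 and 0.502.

```python
import math

def logit(p):
    """Inverse sigmoid: roughly linear near 0.5, stretches values
    near 0 and 1; defined for p strictly inside (0, 1)."""
    return math.log(p / (1.0 - p))

def logit_distance(p, q):
    """Ingredient distance measured in logit space rather than on raw
    proportions, which are not normally distributed."""
    return abs(logit(p) - logit(q))
```

Under this transform, a logit-normal assumption amounts to treating the logits of the quantities as normally distributed with parameters μ and σ.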
- In certain embodiments,
recipe score function 616 may be used to determine an L1 distance in nutritional/chemical/physical space. The recipe score function may be based on a difference between the initial formula 104 and the recommended formula 506. The score can be determined in any suitable manner within the scope of the disclosure. -
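A minimal sketch of such a recipe score as an L1 distance between the two formulas in a shared feature space; the nutrient keys and values below are hypothetical.

```python
def recipe_score(initial_features, recommended_features):
    """L1 distance between the initial and recommended formulas in a
    shared nutritional/chemical/physical feature space (hypothetical
    keys); missing features count as zero."""
    keys = initial_features.keys() | recommended_features.keys()
    return sum(abs(initial_features.get(k, 0.0) - recommended_features.get(k, 0.0))
               for k in keys)
```

A lower score indicates the recommended formula stays closer to the initial formula's nutritional profile.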
Attribute predictor 612 may be used to determine new color attributes 508 corresponding to the recommended formula 506. In some implementations, the attribute predictor 612 may utilize the color predictor 102 or functionality of the color predictor 102 to determine the new color attributes 508 based on the attribute distance (e.g., how close the initial formula 104 is to the target color attributes 504). For example, the new color attributes 508 may be similar to the color attributes 106. The new color attributes 508 may belong to the same or a different color space than the target color attributes. As an example, the recommended formula 506 may include 24.5% water, 30.7% pea protein, and 10.2% almond oil, and the new color attributes 508 may include {R: 112, G: 15, B: 210}. The new color attributes 508 may correspond to the recommended formula 506, which may provide an optimal modification to the initial formula 104. - Attribute distance loss function 614 may be used to determine how close the formula is to the target color attributes 504 based on the attribute predictions made by the
attribute predictor 612. In some examples, mean squared error (MSE) in the LAB space can be taken into account. This may be extensible to other attributes (e.g., smell, taste) so long as there is a differentiable model to predict them given a recipe. -
FIG. 7 illustrates an embodiment of a computer-implemented method 700 to determine the color of a food item according to certain embodiments. The method 700 may be executed by the color predictor 102 in FIG. 1 or in FIG. 5, and/or by any suitable combination of components described herein. - In
step 702, a formula for a recipe of the food item may be received. The formula may comprise a list of ingredients and their respective quantities in the recipe. The food item may include plant-based ingredients, animal-based ingredients, synthetic ingredients, and/or other suitable types of ingredients. As discussed with reference to FIG. 2, the formula may be the initial formula 104 received by the color predictor 102. The formula may comprise ingredients a-m and their respective quantities 1-n. For example, the ingredient a may be water and the quantity 1 may be 47.2%, the ingredient b may be pea protein and the quantity 2 may be 20.6%, and so on. In some examples, the formula may be part of the raw recipe data 218 in FIG. 2. - In
step 704, the list of ingredients may be encoded to be represented as an embedding vector in a color space. As discussed with reference to FIG. 2, the list of ingredients may be encoded by the multi-warm encoder 204 or another suitable encoder. In some embodiments, the list of ingredients may be parsed by the ingredient parser 202 before being fed to the multi-warm encoder 204. - In
step 704, color attributes associated with the food item can be predicted using a color predictor. The color attributes may belong to a color space. In some implementations, the color predictor 102 may be used to predict the color attributes 106 associated with the food item as discussed with reference to FIG. 2. As an example, the color attributes 106 may belong to the CIELAB color space and may include floating point values. - The color predictor may have been trained using a supervised dataset and/or an unsupervised dataset. In one example, the
raw recipe data 218 and the images extracted from the images database 220 may have been used to train the prediction model 216 in the color predictor 102. For example, the method 700 may also include encoding the list of ingredients to generate a multi-warm vector. The list of ingredients can be encoded by the multi-warm encoder 204 to generate a multi-warm vector comprising a respective dimension for each ingredient in the list of ingredients. In some examples, the list of ingredients may be parsed by the ingredient parser 202 before being fed to the multi-warm encoder 204. - The
method 700 may further include generating the ingredient embedding vector 404 using the ingredient embedder 206 given the multi-warm vector 300. The method 700 may further include extracting color information associated with the food item for representation in the color space. For example, the color information may be extracted using the image color extractor 208, and the corresponding colors may be encoded using the color encoder 212 for representation in the color space. The method 700 may further include training the color predictor given the ingredient embedding vector and the color information. As an example, the color predictor 102 may utilize the CIEDE2000 delta E metric as a loss function for improving the prediction of the color. - In
step 706, the color of the food item can be determined based on the color attributes. For example, the prediction model 216 may determine the color of the food item based on the color space to which the color attributes 106 belong. For example, the prediction model 216 may determine that the color of the food item is a particular color from a color palette or color scheme. -
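The final mapping from predicted color attributes to a named color from a palette can be as simple as a nearest-neighbor lookup; the palette entries below, given in CIELAB, are hypothetical.

```python
import math

# Hypothetical palette of named colors in CIELAB (L*, a*, b*).
PALETTE = {
    "beige": (85.0, 5.0, 20.0),
    "brown": (40.0, 20.0, 30.0),
    "green": (55.0, -40.0, 40.0),
}

def nearest_palette_color(lab, palette=PALETTE):
    """Map predicted L*a*b* attributes to the closest named palette
    color, using Euclidean distance in the CIELAB space."""
    return min(palette, key=lambda name: math.dist(lab, palette[name]))
```

A perceptual metric such as CIEDE2000 could be substituted for the Euclidean distance without changing the structure of the lookup.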
FIG. 8 illustrates an embodiment of a computer-implemented method 800 to provide a recommendation to achieve target color attributes for a food item according to certain embodiments. The method 800 may be executed by the color recommender 502 in FIG. 5. - In
step 802, target color attributes are received to achieve a target color of a food item given its formula. The target color may correspond to the target color attributes. For example, the target color attributes 504 may be received by the color recommender 502. In some examples, the target color attributes 504 may be part of the raw parameters 602 in FIG. 6. The target color attributes may belong to a color space, e.g., CIELAB, RGB, inverted RGB, etc. - In
step 804, using a color recommender, changes to the formula can be determined to achieve the target color. The color recommender 502 may determine the recommended formula 506 as discussed with reference to FIG. 5 and FIG. 6. In certain embodiments, the changes in the formula may have been determined based on the SGD and back propagation models as well as one or more loss functions. For example, the loss functions may include the sparsity loss function 604, the likelihood loss function 610, the attribute distance loss function 614, and the ingredient distance loss function 618 to take the ingredient sparsity, ingredient likelihood, attribute distance, and ingredient distance, respectively, into consideration. - In
step 806, new color attributes corresponding to the recommended formula are provided. The color recommender 502 may provide the new color attributes 508 corresponding to the recommended formula 506 using the attribute predictor 612. The attribute predictor 612 may be similar to the color predictor 102 or include the functionality of the color predictor 102, and may consider the attribute distance loss function 614 in determining the new color attributes 508. - As discussed with reference to
FIGS. 1-8, the disclosed embodiments can utilize various machine learning algorithms and/or other suitable models to help minimize cooking time and resources by identifying, even before a food item is cooked or prepared, whether a recipe for the food item needs to be altered or whether the food item does not need to be cooked at all, based on the color prediction. Additionally, the color recommender can use the color predictor to provide a recommendation for changes in the formula for the recipe to improve the color of the food item or to achieve a target color. - Certain embodiments can confer improvements in computer-related technology (e.g., artificial intelligence, machine learning, neural networks, etc.) by facilitating computer performance of functions not previously performable and/or not practically performed by the human mind. For example, the technology can leverage one or more computational machine learning-associated training sets for training one or more artificial intelligence models (e.g., color predictor models; color recommender models; etc.) in a non-generic, application-specific fashion for predicting color of a food item and/or recommending changes to achieve a target color of a food item.
- Certain embodiments can confer improvements in functioning of computing systems themselves through components (e.g., components of embodiments of systems described herein, etc.) and/or processes (e.g., processes of embodiments of methods described herein, etc.) for improving computational accuracy in computationally predicting color of a food item and/or computationally recommending changes to achieve a target color of a food item.
- Embodiments of the systems and/or methods can include components for and/or processes including generating one or more food items including one or more colors (e.g., food items with the one or more colors) determined based on recommendations from one or more color recommender models and/or color prediction outputs from one or more predictor models. In a specific example, the method can include determining a set of recommended changes (e.g., to a food item formula) to achieve one or more colors for a food item; and generating the food items including the one or more colors (and/or including similar colors or derivable colors) based on the set of recommended changes (e.g., implementing one or more portions of the recommended changes; implementing modifications derived based on the recommended changes; etc.).
- Embodiments of the methods can include any suitable processes and/or functionality described in relation to the figures, system, and/or described herein.
- Models described herein can be run or updated: once; at a predetermined frequency; every time a certain process is performed; every time a trigger condition is satisfied and/or at any other suitable time and frequency. Models can be run or updated concurrently with one or more other models, serially, at varying frequencies, and/or at any other suitable time. Each model can be validated, verified, reinforced, calibrated, or otherwise updated based on newly received, up-to-date data; historical data or be updated based on any other suitable data.
- Portions of embodiments of methods and/or systems described herein are preferably performed by a first party but can additionally or alternatively be performed by one or more third parties, users, and/or any suitable entities.
- Additionally or alternatively, data described herein can be associated with any suitable temporal indicators (e.g., seconds, minutes, hours, days, weeks, time periods, time points, timestamps, etc.) including one or more: temporal indicators indicating when the data was collected, determined (e.g., output by a model described herein), transmitted, received, and/or otherwise processed; temporal indicators providing context to content described by the data; changes in temporal indicators (e.g., data over time; change in data; data patterns; data trends; data extrapolation and/or other prediction; etc.); and/or any other suitable indicators related to time.
- Additionally or alternatively, parameters, metrics, inputs (e.g., formulas, ingredient attributes, other suitable features, etc.), outputs (e.g., color attributes, recommended formulas, etc.), and/or other suitable data can be associated with value types including any one or more of: scores, text values (e.g., ingredient descriptors, etc.), numerical values (e.g., color attributes, etc.), binary values, classifications, confidence levels, identifiers, values along a spectrum, and/or any other suitable types of values. Any suitable types of data described herein can be used as inputs (e.g., for different models described herein; for components of a system; etc.), generated as outputs (e.g., of models; of components of a system; etc.), and/or manipulated in any suitable manner for any suitable components.
- Additionally or alternatively, suitable portions of embodiments of methods and/or systems described herein can include, apply, employ, perform, use, be based on, and/or otherwise be associated with one or more processing operations including any one or more of: extracting features, performing pattern recognition on data, fusing data from multiple sources, combination of values (e.g., averaging values, etc.), compression, conversion (e.g., digital-to-analog conversion, analog-to-digital conversion), performing statistical estimation on data (e.g. ordinary least squares regression, non-negative least squares regression, principal components analysis, ridge regression, etc.), normalization, updating, ranking, weighting, validating, filtering (e.g., for baseline correction, data cropping, etc.), noise reduction, smoothing, filling (e.g., gap filling), aligning, model fitting, binning, windowing, clipping, transformations, mathematical operations (e.g., derivatives, moving averages, summing, subtracting, multiplying, dividing, etc.), data association, interpolating, extrapolating, clustering, image processing techniques, other signal processing operations, other image processing operations, visualizing, and/or any other suitable processing operations.
- Embodiments of the system and/or portions of embodiments of the system can entirely or partially be executed by, hosted on, communicate with, and/or otherwise include one or more: remote computing systems (e.g., one or more servers, at least one networked computing system, stateless, stateful; etc.), local computing systems, mobile phone devices, other mobile devices, personal computing devices, tablets, databases, application programming interfaces (APIs) (e.g., for accessing data described herein, etc.) and/or any suitable components. Communication by and/or between any components of the
system 100 and/or other suitable components can include wireless communication (e.g., WiFi, Bluetooth, radiofrequency, Zigbee, Z-wave, etc.), wired communication, and/or any other suitable types of communication. - Components of embodiments of the system can be physically and/or logically integrated in any manner (e.g., with any suitable distributions of functionality across the components, such as in relation to portions of embodiments of methods described herein).
- Embodiments of the
methods and/or system 500 can include every combination and permutation of the various system components and the various method processes, including any variants (e.g., embodiments, variations, examples, specific examples, figures, etc.), where portions of the methods can be performed by the system 100 and/or other entities described herein. - Any of the variants described herein (e.g., embodiments, variations, examples, specific examples, figures, etc.) and/or any portion of the variants described herein can be additionally or alternatively combined, aggregated, excluded, used, performed serially, performed in parallel, and/or otherwise applied.
- The
system 500 and/or methods - As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the
system 500, methods
Claims (20)
1. A method comprising:
representing, in computer storage media, an initial formula for a food item as an initial formula vector;
representing, in computer storage media, target food color attributes for the food item as a target food color vector;
applying a neural network model to the initial formula vector and the target food color vector to determine a recommended formula for the food item, the recommended formula for the food item comprising changes to the initial formula for the food item to achieve the target food color attributes for the food item.
2. The method of claim 1, wherein the initial formula vector comprises a set of ingredients and their respective quantities.
3. The method of claim 1 , wherein the neural network model implements at least one loss function to determine the changes to the initial formula, wherein the at least one loss function considers at least one of a plurality of factors including:
an attribute distance between the initial formula and the target food color,
an ingredient distance between the initial formula and recommended formula,
an ingredient likelihood of the recommended formula to be a realistic formula, and
an ingredient sparsity of ingredients in the recommended formula.
4. The method of claim 1 , wherein the recommended formula is determined based on stochastic gradient descent (SGD) and back propagation models.
5. The method of claim 1 , wherein applying the neural network model comprises optimizing the initial formula vector using a sparsity loss function.
6. The method of claim 1, wherein applying the neural network model comprises normalizing the initial formula vector prior to being operated on by one or more of at least one loss function.
7. The method of claim 1 , wherein applying the neural network model further comprises:
predicting, using a first machine learning model, color attributes associated with the initial formula, wherein the color attributes associated with the initial formula are represented as an initial formula color vector;
predicting, using a second machine learning model, color attributes associated with the recommended formula, wherein the color attributes associated with the recommended formula are represented as a recommended formula color vector.
8. The method of claim 7 , wherein the initial formula color vector and the recommended formula color vector are used by an attribute distance loss function to determine an attribute distance between the initial formula and the target food color.
9. One or more non-transitory computer-readable storage media storing one or more instructions which, when executed by one or more computing devices, cause:
representing, in computer storage media, an initial formula for a food item as an initial formula vector;
representing, in computer storage media, target food color attributes for the food item as a target food color vector;
applying a neural network model to the initial formula vector and the target food color vector to determine a recommended formula for the food item, the recommended formula for the food item comprising changes to the initial formula for the food item to achieve the target food color attributes for the food item.
10. The one or more non-transitory computer-readable storage media of claim 9, wherein the initial formula vector comprises a set of ingredients and their respective quantities.
11. The one or more non-transitory computer-readable storage media of claim 9 , wherein the neural network model implements at least one loss function to determine the changes to the initial formula, wherein the at least one loss function considers at least one of a plurality of factors including:
an attribute distance between the initial formula and the target food color,
an ingredient distance between the initial formula and recommended formula,
an ingredient likelihood of the recommended formula to be a realistic formula, and
an ingredient sparsity of ingredients in the recommended formula.
12. The one or more non-transitory computer-readable storage media of claim 9 , wherein the recommended formula is determined based on stochastic gradient descent (SGD) and back propagation models.
13. The one or more non-transitory computer-readable storage media of claim 9 , wherein applying the neural network model comprises optimizing the initial formula vector using a sparsity loss function.
14. The one or more non-transitory computer-readable storage media of claim 9, wherein applying the neural network model comprises normalizing the initial formula vector prior to being operated on by one or more of at least one loss function.
15. The one or more non-transitory computer-readable storage media of claim 9 , wherein applying the neural network model further comprises:
predicting, using a first machine learning model, color attributes associated with the initial formula, wherein the color attributes associated with the initial formula are represented as an initial formula color vector;
predicting, using a second machine learning model, color attributes associated with the recommended formula, wherein the color attributes associated with the recommended formula are represented as a recommended formula color vector.
16. The one or more non-transitory computer-readable storage media of claim 15 , wherein the initial formula color vector and the recommended formula color vector are used by an attribute distance loss function to determine an attribute distance between the initial formula and the target food color.
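The attribute distance of claim 16 can be sketched as a Euclidean distance between color vectors, e.g. the CIE76 color difference when colors are expressed in L*a*b* space. The specific color values below are illustrative:

```python
import numpy as np

def delta_e(lab1, lab2) -> float:
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float)
                                - np.asarray(lab2, dtype=float)))

# distance between a predicted formula color and the target color
d = delta_e([65.0, 12.0, 30.0], [80.0, 5.0, 20.0])
```

Used as a loss term, this distance shrinks as the recommended formula's predicted color approaches the target.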
17. A computing system comprising:
one or more computer systems comprising one or more hardware processors and storage media; and
instructions stored in the storage media and which, when executed by the computing system, cause the computing system to perform:
representing an initial formula for a food item as an initial formula vector;
representing target food color attributes for the food item as a target food color vector;
applying a neural network model to the initial formula vector and the target food color vector to determine a recommended formula for the food item, the recommended formula for the food item comprising changes to the initial formula for the food item to achieve the target food color attributes for the food item.
18. The computing system of claim 17 , wherein the initial formula vector comprises a set of ingredients and their respective quantities.
19. The computing system of claim 17 , wherein the neural network model implements at least one loss function to determine the changes to the initial formula, wherein the at least one loss function considers at least one of a plurality of factors including:
an attribute distance between the initial formula and the target food color,
an ingredient distance between the initial formula and recommended formula,
an ingredient likelihood of the recommended formula to be a realistic formula, and
an ingredient sparsity of ingredients in the recommended formula.
20. The computing system of claim 17 , wherein applying the neural network model further comprises:
predicting, using a first machine learning model, color attributes associated with the initial formula, wherein the color attributes associated with the initial formula are represented as an initial formula color vector;
predicting, using a second machine learning model, color attributes associated with the recommended formula, wherein the color attributes associated with the recommended formula are represented as a recommended formula color vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/180,451 US20210174169A1 (en) | 2019-10-08 | 2021-02-19 | Method to predict food color and recommend changes to achieve a target food color |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/596,689 US10970621B1 (en) | 2019-10-08 | 2019-10-08 | Methods to predict food color and recommend changes to achieve a target food color |
US17/180,451 US20210174169A1 (en) | 2019-10-08 | 2021-02-19 | Method to predict food color and recommend changes to achieve a target food color |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/596,689 Continuation US10970621B1 (en) | 2019-10-08 | 2019-10-08 | Methods to predict food color and recommend changes to achieve a target food color |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210174169A1 true US20210174169A1 (en) | 2021-06-10 |
Family
ID=75275184
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/596,689 Active US10970621B1 (en) | 2019-10-08 | 2019-10-08 | Methods to predict food color and recommend changes to achieve a target food color |
US17/180,451 Abandoned US20210174169A1 (en) | 2019-10-08 | 2021-02-19 | Method to predict food color and recommend changes to achieve a target food color |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/596,689 Active US10970621B1 (en) | 2019-10-08 | 2019-10-08 | Methods to predict food color and recommend changes to achieve a target food color |
Country Status (2)
Country | Link |
---|---|
US (2) | US10970621B1 (en) |
WO (1) | WO2021071756A1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4010790A4 (en) | 2019-08-08 | 2023-08-02 | Notco Delaware, LLC | Method of classifying flavors |
US11937019B2 (en) | 2021-06-07 | 2024-03-19 | Elementary Robotics, Inc. | Intelligent quality assurance and inspection device having multiple camera modules |
US10962473B1 (en) | 2020-11-05 | 2021-03-30 | NotCo Delaware, LLC | Protein secondary structure prediction |
US11514350B1 (en) | 2021-05-04 | 2022-11-29 | NotCo Delaware, LLC | Machine learning driven experimental design for food technology |
US11348664B1 (en) | 2021-06-17 | 2022-05-31 | NotCo Delaware, LLC | Machine learning driven chemical compound replacement technology |
US11605159B1 (en) | 2021-11-03 | 2023-03-14 | Elementary Robotics, Inc. | Computationally efficient quality assurance inspection processes using machine learning |
US11404144B1 (en) | 2021-11-04 | 2022-08-02 | NotCo Delaware, LLC | Systems and methods to suggest chemical compounds using artificial intelligence |
US11373107B1 (en) | 2021-11-04 | 2022-06-28 | NotCo Delaware, LLC | Systems and methods to suggest source ingredients using artificial intelligence |
US11675345B2 (en) | 2021-11-10 | 2023-06-13 | Elementary Robotics, Inc. | Cloud-based multi-camera quality assurance architecture |
WO2023154345A1 (en) | 2022-02-09 | 2023-08-17 | Climax Foods Inc. | System and method for sensory characterization |
US11605216B1 (en) * | 2022-02-10 | 2023-03-14 | Elementary Robotics, Inc. | Intelligent automated image clustering for quality assurance |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190171707A1 (en) * | 2017-12-05 | 2019-06-06 | myFavorEats Ltd. | Systems and methods for automatic analysis of text-based food-recipes |
US20190251441A1 (en) * | 2018-02-13 | 2019-08-15 | Adobe Systems Incorporated | Reducing architectural complexity of convolutional neural networks via channel pruning |
US20210073944A1 (en) * | 2019-09-09 | 2021-03-11 | Nvidia Corporation | Video upsampling using one or more neural networks |
US20210117665A1 (en) * | 2017-11-13 | 2021-04-22 | Way2Vat Ltd. | Systems and methods for neuronal visual-linguistic data retrieval from an imaged document |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6993512B2 (en) * | 2001-06-05 | 2006-01-31 | Basf Corporation | System and method for converting a color formula using an artificial intelligence based conversion model |
EP1627332A2 (en) * | 2003-04-23 | 2006-02-22 | PolyOne Corporation | Digitally mapped formulaic color space and method of making and using same |
US8775341B1 (en) * | 2010-10-26 | 2014-07-08 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9870550B2 (en) | 2015-11-12 | 2018-01-16 | International Business Machines Corporation | Modifying existing recipes to incorporate additional or replace existing ingredients |
US10321705B2 (en) | 2016-02-19 | 2019-06-18 | Just, Inc. | Functional mung bean-derived compositions |
US10332276B2 (en) * | 2016-05-24 | 2019-06-25 | International Business Machines Corporation | Predicting a chromatic identity of an existing recipe and modifying the existing recipe to meet a desired set of colors by replacing existing elements of the recipe |
US11042811B2 (en) * | 2016-10-05 | 2021-06-22 | D-Wave Systems Inc. | Discrete variational auto-encoder systems and methods for machine learning using adiabatic quantum computers |
US20180203921A1 (en) * | 2017-01-17 | 2018-07-19 | Xerox Corporation | Semantic search in document review on a tangible user interface |
- 2019-10-08: US application US16/596,689 filed (US10970621B1, Active)
- 2020-10-02: international application PCT/US2020/054089 filed (WO2021071756A1, Application Filing)
- 2021-02-19: US application US17/180,451 filed (US20210174169A1, Abandoned)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210117665A1 (en) * | 2017-11-13 | 2021-04-22 | Way2Vat Ltd. | Systems and methods for neuronal visual-linguistic data retrieval from an imaged document |
US20190171707A1 (en) * | 2017-12-05 | 2019-06-06 | myFavorEats Ltd. | Systems and methods for automatic analysis of text-based food-recipes |
US20190251441A1 (en) * | 2018-02-13 | 2019-08-15 | Adobe Systems Incorporated | Reducing architectural complexity of convolutional neural networks via channel pruning |
US20210073944A1 (en) * | 2019-09-09 | 2021-03-11 | Nvidia Corporation | Video upsampling using one or more neural networks |
Non-Patent Citations (1)
Title |
---|
Natarajan et al. Optimal Classification with Multivariate Losses. Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. (Year: 2016) * |
Also Published As
Publication number | Publication date |
---|---|
WO2021071756A1 (en) | 2021-04-15 |
US20210103796A1 (en) | 2021-04-08 |
US10970621B1 (en) | 2021-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10970621B1 (en) | Methods to predict food color and recommend changes to achieve a target food color | |
US11164478B2 (en) | Systems and methods to mimic target food items using artificial intelligence | |
Yu et al. | Design of experiments and regression modelling in food flavour and sensory analysis: A review | |
US10102454B2 (en) | Image classification utilizing semantic relationships in a classification hierarchy | |
US20200265497A1 (en) | Method and system for improving food-related personalization | |
Sharifzadeh et al. | Supervised feature selection for linear and non-linear regression of L⁎ a⁎ b⁎ color from multispectral images of meat | |
Sofu et al. | Estimation of storage time of yogurt with artificial neural network modeling | |
Pardede et al. | Fruit ripeness based on RGB, HSV, HSL, L* a* b* color feature using SVM | |
Bougeard et al. | From multiblock partial least squares to multiblock redundancy analysis. A continuum approach | |
Taghadomi-Saberi et al. | Determination of Cherry Color Parameters during Ripening by Artificial Neural Network Assisted Image Processing Technique
De Clercq et al. | Data-driven recipe completion using machine learning methods | |
Amiryousefi et al. | An empowered adaptive neuro-fuzzy inference system using self-organizing map clustering to predict mass transfer kinetics in deep-fat frying of ostrich meat plates | |
Cortez et al. | Lamb meat quality assessment by support vector machines | |
Kao et al. | Determination of Lycopersicon maturity using convolutional autoencoders | |
Sarkar et al. | Artificial intelligence aided adulteration detection and quantification for red chilli powder | |
Taparugssanagorn et al. | A non-destructive oil palm ripeness recognition system using relative entropy | |
Cueto et al. | Completing partial recipes using item-based collaborative filtering to recommend ingredients | |
CN115516519A (en) | Method and apparatus for hierarchy-based object recognition for visual perception | |
Kayıkçı et al. | Classification of turkish cuisine with deep learning on mobile platform | |
Tang et al. | Healthy Recipe Recommendation using Nutrition and Ratings Models | |
Torres et al. | A Computer-Aided Inspection System to Predict Quality Characteristics in Food Technology | |
Espinoza | An Application of Deep Learning Models to Automate Food Waste Classification | |
CN113807491A (en) | Method for simulating target food item using artificial intelligence | |
Tachie et al. | Predicting fatty acid classes in popular US snacks using NHANES data and machine learning approaches. Nutrients. 2023; 15: 3310 | |
Pitroda et al. | Freshy Pal: A Proposal of an Interactive Web based Tool to Automatically Detect Microbial Growth on White Bread |
Legal Events
Code | Title | Description
---|---|---
STPP | Information on status: patent application and granting procedure in general | APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED
STCB | Information on status: application discontinuation | ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION