WO2018040105A1 - Food material identification system and method, food material model training method, refrigerator and server - Google Patents

Food material identification system and method, food material model training method, refrigerator and server

Info

Publication number
WO2018040105A1
WO2018040105A1 (PCT Application No. PCT/CN2016/098120)
Authority
WO
WIPO (PCT)
Prior art keywords
image
food material
refrigerator
training
model
Prior art date
Application number
PCT/CN2016/098120
Other languages
English (en)
Chinese (zh)
Inventor
杨世清
石周
唐红强
Original Assignee
合肥华凌股份有限公司
合肥美的电冰箱有限公司
美的集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合肥华凌股份有限公司, 合肥美的电冰箱有限公司, 美的集团股份有限公司 filed Critical 合肥华凌股份有限公司
Priority to PCT/CN2016/098120 priority Critical patent/WO2018040105A1/fr
Publication of WO2018040105A1 publication Critical patent/WO2018040105A1/fr

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25: REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION OR SOLIDIFICATION OF GASES
    • F25D: REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D11/00: Self-contained movable devices, e.g. domestic refrigerators
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor

Definitions

  • The invention belongs to the technical field of electrical appliance manufacturing, and particularly relates to a food material identification system, a food material identification method, a food material model training method, a refrigerator and a server.
  • Refrigerators not only preserve food but have also become part of the home network, providing more intelligent services for family members.
  • Food material identification, as a front-end information collection module, provides the basis for the establishment of a subsequent food library.
  • Traditional image recognition technology has a low recognition rate and poor real-time performance in complex scenes and cannot be applied well to identifying the large number of ingredients in a refrigerator.
  • In addition, its computational complexity makes it difficult to use in embedded systems.
  • The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
  • Accordingly, an object of the present invention is to provide a food material identification system with a high recognition rate and strong model generalization ability.
  • The invention also provides a food material identification method, a food material model training method, a refrigerator and a server.
  • A food material identification system includes a refrigerator and a server. The refrigerator includes an image collection device configured to collect images of the food materials in the refrigerator. The server obtains the image of the food materials in the refrigerator and identifies the image according to a food material model, the food material model being a neural network obtained by training with a deep learning algorithm.
  • The food material identification system of the embodiment of the invention combines local photographing with remote recognition and uses the computing power of the server to apply the deep learning algorithm to image recognition. The recognition rate is improved, so the system is better suited to identifying the large number of food materials in a refrigerator.
  • The image collection device includes: a camera module for collecting an image of the food materials in the refrigerator; a communication module configured to send the image of the food materials in the refrigerator to the server; and a control module for controlling the camera module and the communication module, respectively.
  • The image collection device further includes a lighting module configured to illuminate the environment in which the food materials in the refrigerator are located, making it easier for the camera module to collect the image of the food materials.
  • After receiving the door-closing signal sent by the controller of the refrigerator, the control module controls the camera module to collect an image of the food materials in the refrigerator and controls the communication module to send the image to the server.
  • The food material model is obtained by training as follows: an image of the food materials in the refrigerator that is pre-acquired and calibrated is taken as a training image, and the model parameters are determined based on the training image according to the deep learning algorithm to obtain the food material model.
  • The food material model includes one of a convolutional neural network, a recursive neural network, and a recurrent neural network.
  • A food material identification method comprises the steps of: obtaining an image of the food materials in the refrigerator; and identifying the image according to the food material model to obtain the food material information, wherein the food material model is a neural network obtained by training with a deep learning algorithm.
  • The recognition rate is improved, and the more complicated task of identifying food materials in the refrigerator can be handled better.
  • The food material model is obtained by training as follows: an image of the food materials in the refrigerator collected and calibrated in advance is taken as a training image, and the model parameters are determined based on the training image according to the deep learning algorithm to obtain the food material model.
  • The food material model includes one of a convolutional neural network, a recursive neural network, and a recurrent neural network.
  • A food material model training method includes the steps of: obtaining an image of the food materials in a refrigerator that is pre-acquired and calibrated as a training image; and processing the training image according to a deep learning algorithm to determine the model parameters and obtain the food material model.
  • The food material model training method uses the deep learning algorithm with images of the food in the refrigerator as input data and obtains a food material model that can be applied to identifying the food in the refrigerator, improving the recognition rate.
  • The food material model includes one of a convolutional neural network, a recursive neural network, and a recurrent neural network.
  • the convolutional neural network comprises a convolutional layer, a pooling layer, an excitation layer and a fully connected layer
  • the input feature of the first layer is the training image
  • the output features of each layer are used as input features of the next layer
  • Processing the training image with the deep learning algorithm further comprises: the convolutional layer performs feature compression on the input features by convolution operations; the pooling layer performs pooling on the input features; the excitation layer passes the input features through an excitation function to obtain the output features and normalizes the output features; and in the fully connected layer, all nodes of the input features are connected to all nodes of the output features.
  • Pooling (downsampling) the input data reduces the redundancy of the parameters, and normalizing the data after the excitation function improves the effectiveness of backpropagation and the generalization ability of the model.
  • The convolutional neural network may include a plurality of convolutional layers, pooling layers, and excitation layers, with a fully connected layer at the tail of the convolutional neural network, where different convolution kernels of a convolutional layer acquire different output features.
  • The input is convolved multiple times with different convolution kernels, so the extracted features become more global.
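  • For illustration, a minimal sketch of such a network, assuming PyTorch and arbitrary layer sizes, input resolution and class count (none of which are specified in the disclosure), might look like the following; each block contains a convolution, an excitation (activation) function, a normalization step and a pooling step, followed by a fully connected layer at the tail:

      import torch
      import torch.nn as nn

      class FoodNet(nn.Module):
          """Small convolutional network: conv -> excitation -> normalization -> pooling blocks, FC tail."""
          def __init__(self, num_classes=20):                       # class count is an assumption
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=3, padding=1),       # different kernels yield different feature maps
                  nn.ReLU(),                                        # excitation layer
                  nn.BatchNorm2d(16),                               # normalization after the excitation function
                  nn.MaxPool2d(2),                                  # pooling (downsampling)
                  nn.Conv2d(16, 32, kernel_size=3, padding=1),
                  nn.ReLU(),
                  nn.BatchNorm2d(32),
                  nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected tail (224x224 input assumed)

          def forward(self, x):
              x = self.features(x)        # feature extraction over the stacked convolutional blocks
              x = torch.flatten(x, 1)     # flatten the feature maps into a vector
              return self.classifier(x)   # the fully connected layer produces the class scores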
  • the food material model training method further comprises: adjusting the model parameters of the food material model according to the actual food material information, thereby ensuring the generalization ability and the learning ability of the model.
  • A refrigerator includes an image collection device for capturing an image of the food materials in the refrigerator and transmitting the image to a server, so that the server identifies the image according to the food material model to obtain the food material information, wherein the food material model is a neural network obtained by training with a deep learning algorithm.
  • the refrigerator of the embodiment of the invention sends the image of the collected food material to the server through the image collecting device, and provides a data foundation for the server to use the deep learning algorithm to identify the food material in the refrigerator.
  • The image collection device includes: a camera module for collecting an image of the food materials in the refrigerator; a communication module configured to send the image of the food materials in the refrigerator to the server; and a control module for controlling the camera module and the communication module, respectively.
  • The image collection device further includes a lighting module configured to illuminate the environment in which the food materials in the refrigerator are located, making it easier to collect the image of the food materials.
  • After receiving the door-closing signal sent by the controller of the refrigerator, the control module controls the camera module to collect an image of the food materials in the refrigerator and controls the communication module to send the image to the server.
  • The server obtains an image of the food materials in the refrigerator and identifies the image according to the food material model to obtain the food material information, wherein the food material model is a neural network obtained by training with a deep learning algorithm.
  • The server of the embodiment of the present invention applies the deep learning algorithm to image recognition based on its powerful computing capability; the method is simple, the recognition rate is improved, and it can be better applied to identifying the large number of food materials in the refrigerator.
  • The food material model is obtained by training as follows: an image of the food materials in the refrigerator collected and calibrated in advance is taken as a training image, and the model parameters are determined based on the training image according to the deep learning algorithm to obtain the food material model.
  • The food material model includes one of a convolutional neural network, a recursive neural network, and a recurrent neural network.
  • FIG. 1 is a block diagram of a food material identification system in accordance with an embodiment of the present invention.
  • FIG. 2 is a block diagram of a food material identification system in accordance with one embodiment of the present invention.
  • FIG. 3 is a block diagram of a refrigerator in accordance with another embodiment of the present invention.
  • FIG. 4 is a flow chart of a method for identifying a foodstuff according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a food material model training method according to an embodiment of the present invention.
  • FIG. 6 is a process flow diagram of a convolutional neural network based algorithm in accordance with an embodiment of the present invention.
  • FIG. 7 is a flow chart of feature extraction of a training image in accordance with another embodiment of the present invention.
  • Figure 8 is a flow chart of a food material model training process in accordance with yet another embodiment of the present invention.
  • FIG. 9 is a block diagram of a refrigerator in accordance with one embodiment of the present invention.
  • FIG. 10 is a block diagram of a refrigerator in accordance with another embodiment of the present invention.
  • the food material identification system 1000 includes a refrigerator 100 and a server 200.
  • the refrigerator 100 includes an image capture device 10 for collecting images of foodstuffs in the refrigerator.
  • the refrigerator 100 and the server 200 can perform data interaction.
  • The server 200 obtains an image of the food materials in the refrigerator and identifies the obtained image according to the food material model to obtain the food material information, such as the category of the food materials and their coordinates, wherein the food material model is a neural network obtained by training with a deep learning algorithm.
  • Introducing the deep learning algorithm into the identification of the food in the refrigerator and increasing the number of neural network layers allows the mapping relationship between output and input to be fit better, improving the recognition accuracy.
  • The food material identification system 1000 of the embodiment of the present invention combines local photographing with remote recognition and utilizes the computing power of the server 200 to apply the deep learning algorithm to image recognition. The method is simple, the recognition rate is improved, and the system can be better applied to identifying the large number of ingredients in the refrigerator 100.
  • the image capture device 10 includes a camera module 11, a communication module 12, and a control module 13.
  • the camera module 11 is configured to collect an image of the foodstuff in the refrigerator 100
  • the communication module 12 is configured to send an image of the foodstuff in the refrigerator 100 to the server 200
  • the control module 13 is configured to respectively control the camera module 11 and the communication module 12.
  • the image capture device 10 further includes a lighting module 14 for illuminating the environment in which the foodstuffs in the refrigerator 100 are located, so that the camera module 11 can capture the image of the foodstuff.
  • the control module 13 controls the camera module 11 to collect an image of the foodstuff in the refrigerator after receiving the door closing signal from the controller of the refrigerator 100, and controls the communication module 12 to send the image to the server 200.
  • In the embodiment of the present invention, the refrigerator can be provided with an image capture device 10 that includes a control module 13, such as a CPU (Central Processing Unit).
  • The control circuit 01 of the refrigerator 100 is controlled by a controller 02 such as an MCU (Microcontroller Unit); the CPU and the MCU can exchange data, and related peripherals can be mounted on the CPU peripheral circuit.
  • In the present invention, the related peripherals include the lighting module 14, the camera module 11 and the communication module 12, such as a WIFI module.
  • the CPU controls the camera module 11 to cooperate with the illumination module 14 to complete the shooting of the local food image.
  • The control module 13 controls the camera module 11 and the illumination module 14 to jointly capture an image and uploads the captured image to the server 200 through the WIFI module; the server 200 then identifies the food material information, after which it can feed the identification results back to the refrigerator or provide them to the relevant personnel.
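  • A minimal sketch of this capture-and-upload sequence, assuming hypothetical camera and lighting helper objects and an illustrative HTTP endpoint (the disclosure only specifies a camera module, a lighting module and a WIFI communication module, not their interfaces):

      import requests   # assumed HTTP transport over the Wi-Fi link

      SERVER_URL = "http://example-server/food/upload"   # placeholder endpoint, not from the disclosure

      def on_door_closed(camera, light, refrigerator_id):
          """Run when the MCU forwards the door-closing signal to the control module (CPU)."""
          light.on()                        # illumination module lights the compartment
          image_bytes = camera.capture()    # camera module captures the food image (JPEG bytes assumed)
          light.off()
          # communication module uploads the image to the server for recognition
          response = requests.post(
              SERVER_URL,
              files={"image": ("food.jpg", image_bytes, "image/jpeg")},
              data={"refrigerator_id": refrigerator_id},
              timeout=10,
          )
          return response.json()            # the server can feed the recognition result back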
  • The food material model deployed on the server 200 can be obtained by training as follows: an image of the food in the refrigerator that is pre-acquired and calibrated is taken as a training image, and the model parameters are determined based on the training image according to the deep learning algorithm.
  • The food material model may include one of a convolutional neural network, a recursive neural network, and a recurrent neural network, and the model parameters may be determined by the back-propagation (BP) algorithm.
  • Training of the food material model can be completed offline: images of the food in the refrigerator, or in a simulated refrigerator environment, are captured and the positions of the various ingredients are manually marked; an input vector is obtained through the operations of the deep learning algorithm and is then fully connected with the pre-calibrated results; and the model parameters are determined by training on a large number of training images, which is equivalent to determining the coefficients in a functional relationship.
  • Once the parameter training is completed, the food material model is obtained and deployed on the server 200.
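  • A minimal sketch of such offline parameter training by backpropagation, assuming PyTorch, a data loader of pre-calibrated refrigerator images, and arbitrary learning rate and epoch count:

      import torch
      import torch.nn as nn

      def train_food_model(model, train_loader, epochs=10, lr=1e-3):
          """Determine the model parameters by backpropagation over calibrated training images."""
          optimizer = torch.optim.SGD(model.parameters(), lr=lr)
          loss_fn = nn.CrossEntropyLoss()            # compares predictions with the manual calibration
          model.train()
          for _ in range(epochs):
              for images, labels in train_loader:    # labels come from the pre-calibrated images
                  optimizer.zero_grad()
                  loss = loss_fn(model(images), labels)
                  loss.backward()                    # BP algorithm: propagate the error backwards
                  optimizer.step()                   # adjust the model parameters
          return model                               # trained food material model, ready to deploy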
  • the training process for the food model is described in detail below.
  • the food material identification method includes the following steps:
  • S1: obtaining an image of the food materials in the refrigerator.
  • Specifically, an image of the food materials in the refrigerator is collected by an image collection device and sent to a server.
  • S2: the image is identified according to the food material model to obtain the food material information, wherein the food material model is a neural network obtained by training with a deep learning algorithm.
  • The food material model is obtained by training as follows: an image of the food materials in the refrigerator that is pre-acquired and calibrated is used as a training image, and the model parameters are determined based on the training image according to the deep learning algorithm to obtain the food material model.
  • The food material model may include one of a convolutional neural network, a recursive neural network, and a recurrent neural network, and the model parameters may be determined by the back-propagation (BP) algorithm.
  • The food material identification method of the embodiment of the invention identifies the food material information using deep learning; the method is simple, the recognition rate is improved, and the more complex task of identifying food materials in the refrigerator can be handled better.
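  • On the server side, identification with a deployed model reduces to a forward pass; a minimal sketch, assuming PyTorch, torchvision preprocessing, a 224x224 input size and an illustrative list of class names (all assumptions, not taken from the disclosure):

      import torch
      from PIL import Image
      from torchvision import transforms

      CLASS_NAMES = ["apple", "egg", "milk"]         # illustrative labels only

      preprocess = transforms.Compose([
          transforms.Resize((224, 224)),             # match the assumed training input size
          transforms.ToTensor(),
      ])

      def identify_food(model, image_path):
          """Identify the food material in one uploaded image."""
          image = Image.open(image_path).convert("RGB")
          batch = preprocess(image).unsqueeze(0)     # add a batch dimension
          model.eval()
          with torch.no_grad():
              scores = model(batch)                  # forward pass through the food material model
          return CLASS_NAMES[scores.argmax(dim=1).item()]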
  • FIG. 5 is a flowchart of a food material model training method according to an embodiment of the present invention. As shown in FIG. 5, the food material model training method includes the following steps:
  • An image of the food materials in the refrigerator that is pre-acquired and calibrated is obtained as a training image, and the training image is processed according to the deep learning algorithm to determine the model parameters and obtain the food material model.
  • The food material model may include one of a convolutional neural network, a recursive neural network, and a recurrent neural network.
  • The training of the model parameters of the neural network can be implemented by the back-propagation (BP) algorithm, which is well documented and is not described here.
  • Different neural networks have different processing flows, and their differences across data sets are mainly determined by the structure of the neural network.
  • For the same network structure, different input data lead to different model parameters.
  • The food material model takes the images of the food materials in the refrigerator as input data and determines the model parameters by deep learning; the food material model is then deployed on the identification server and applied to food material identification in the refrigerator, improving the recognition rate and making it easier to handle complex situations.
  • In the training process of the food material model, namely the food material neural network, the deep network can better fit the relationship between the input food image and the output recognition result and therefore has a high recognition rate.
  • the convolutional neural network comprises a convolutional layer, a pooling layer, an excitation layer and a fully connected layer
  • the input features of the first layer are training images
  • the output features of each layer are used as input features of the next layer.
  • Processing the training image according to the deep learning algorithm further comprises: the convolutional layer performs feature compression on the input features by convolution operations; the pooling layer pools (downsamples) the input features, which reduces the redundancy of the parameters; and the excitation layer passes the input features through an excitation function to obtain the output features and normalizes them, where commonly used excitation functions include ReLU, Maxout and sigmoid.
  • Normalizing the data after the excitation function improves the effectiveness of backpropagation and the generalization ability of the model; in the fully connected layer, all nodes of the input features are connected by weights to all nodes of the output features, i.e., the output of one layer is fully connected to the next layer.
  • The fully connected layer is usually at the end of the network, and the data of the last layer corresponds to the calibration result of the training image.
  • the processing flow of the convolutional neural network is as shown in FIG. 6.
  • Feature extraction is performed first to obtain the feature image, i.e., the convolution operation; the feature image is then downsampled, i.e., pooled; and the feature vector obtained after the normalization process is fed into the neural network as the fully connected layer data.
  • The parameters of the food material model are determined by training on a large number of images, thereby obtaining the food material model.
  • The network structure of the food material model can be adjusted and designed according to the actual situation and needs.
  • When the food material model adopts the convolutional neural network structure, the training image is convolved to obtain a feature image, and the feature image is then pooled, which reduces the redundant information contained in the input vector, lowers its dimension, and greatly reduces the amount of computation.
  • As shown in FIG. 7, the feature extraction process of the convolutional neural network includes: S710, obtaining a training image; S720, performing a convolution operation on the training image with a convolution kernel; S730, adding an offset; and S740, obtaining a feature image.
  • The training image is processed with the selected convolution kernel, based first on the principle of the local receptive field.
  • According to the principle of the local receptive field, human perception of the outside world proceeds from local to global; likewise, the spatial connections in an image are strongest between nearby pixels, while the correlation between distant pixels is weak. Therefore, each neuron does not need to perceive the global image, only a local region; the local information is then combined at a higher level to obtain the global information.
  • The statistical features of different parts of an image are the same, which means that features learned in one part can also be used in another, so the same learned features can be applied at all positions of the training image.
  • Accordingly, the same convolution kernel can be applied across the image to obtain a feature image.
  • Convolution operations also achieve dimensionality reduction of the image to be processed. Because describing an image with a single feature loses information, different convolution kernels can be used to convolve the training image separately to obtain a series of feature images.
  • These features can then be used for classification in the next step.
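  • A minimal NumPy sketch of the S710 to S740 flow, convolving a stand-in grayscale training image with one kernel and adding a bias offset to obtain a feature image (the kernel values and offset are arbitrary):

      import numpy as np

      def convolve2d(image, kernel, offset=0.0):
          """Slide one convolution kernel over the image and add a bias offset (S720, S730)."""
          kh, kw = kernel.shape
          out_h = image.shape[0] - kh + 1
          out_w = image.shape[1] - kw + 1
          feature = np.zeros((out_h, out_w))
          for i in range(out_h):
              for j in range(out_w):
                  # each output pixel depends only on a local receptive field of the input
                  feature[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + offset
          return feature                                 # S740: the feature image

      image = np.random.rand(8, 8)                       # S710: stand-in for a training image
      edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)     # one illustrative 3x3 kernel
      feature_image = convolve2d(image, edge_kernel, offset=0.1)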
  • All the extracted features can in principle be used to train the classifier, but each feature image expresses only one particular feature of the image and still contains considerable redundancy.
  • Owing to the spatial correlation of static images, this redundancy can be further reduced by downsampling.
  • This kind of aggregation operation is called pooling; depending on the aggregation method used, it is referred to as average pooling or maximum pooling.
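  • A minimal NumPy sketch of this pooling (downsampling) step over a feature image, showing both maximum and average pooling with a 2x2 window (the window size is arbitrary):

      import numpy as np

      def pool2d(feature, size=2, mode="max"):
          """Aggregate non-overlapping size x size blocks of a feature image (max or average pooling)."""
          h, w = feature.shape
          h, w = h - h % size, w - w % size                  # drop edge rows/columns that do not fill a block
          blocks = feature[:h, :w].reshape(h // size, size, w // size, size)
          if mode == "max":
              return blocks.max(axis=(1, 3))                 # maximum pooling
          return blocks.mean(axis=(1, 3))                    # average pooling

      feature_image = np.random.rand(6, 6)
      pooled = pool2d(feature_image, size=2, mode="max")     # 6x6 feature image -> 3x3, less redundancy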
  • Because the excitation function changes the data distribution, which can reduce the generalization ability, the data are normalized after the excitation layer, i.e., the output of the excitation function is normalized.
  • The normalization process also increases the training speed: it simply rescales the inputs of each layer to a mean of 0 and a variance of 1. Through normalization, both the generalization ability and the training speed can be improved.
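  • A minimal NumPy sketch of this normalization step, rescaling the output of an excitation function to zero mean and unit variance (the small epsilon guarding against division by zero is an implementation detail, not part of the disclosure):

      import numpy as np

      def normalize(activations, eps=1e-5):
          """Rescale activation values to zero mean and unit variance."""
          mean = activations.mean()
          var = activations.var()
          return (activations - mean) / np.sqrt(var + eps)

      relu_output = np.maximum(np.random.randn(4, 4), 0.0)   # output of a ReLU excitation function
      normalized = normalize(relu_output)                    # approximately zero mean, unit variance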
  • Training of the food material model can be completed offline, as shown in FIG. 8.
  • The food material model training process includes: S810, manually marking the positions of the various ingredients in the training image; S820, convolution operations; S830, pooling; S840, normalization to obtain an input vector; and S850, fully connecting the normalized data with the calibration result of the training image.
  • The convolutional neural network generally includes a plurality of convolutional layers, pooling layers and excitation layers, with a fully connected layer at the tail of the network; that is to say, the training image is subjected to multiple convolution operations, pooling processes and normalizations, where the different convolution kernels of a convolutional layer can be understood as acquiring different output features.
  • Features obtained by a single convolutional layer are often local; the more layers, the more comprehensive the features obtained. Using multi-layer convolution therefore makes the features more global, and the food material model can be designed accordingly as needed.
  • The model parameters of the food material model can also be adjusted based on the actual food material information.
  • The actual food material information in the images uploaded to the server can be determined by manual calibration and compared with the recognition results. For images that are recognized incorrectly, or for foods not included in the previous food material model, the training process described above can be repeated to adjust the parameters of the food material model.
  • In this way, the food material model training method of the present application can adjust the model parameters based on the actual food material information so that the model adapts to new ingredients.
  • In the related art, the recognition results are not effectively monitored, and foods with a low recognition rate cannot be optimized.
  • The method of the present application trains the food material model with supervised learning, checks the recognition results, and adjusts the model parameters accordingly, which improves the generalization ability and learning ability of the model.
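  • A minimal sketch of the feedback step, assuming PyTorch and a data loader of manually calibrated images: the recognition results are compared with the manual calibration, and the misrecognized samples (or samples of new foods) are collected so they can be passed back through the training loop sketched earlier, typically with a reduced learning rate:

      import torch

      def collect_feedback(model, calibrated_loader):
          """Compare recognition results with manual calibration and keep the misrecognized samples."""
          model.eval()
          retraining_samples = []
          with torch.no_grad():
              for images, labels in calibrated_loader:       # labels are the manually calibrated truth
                  predictions = model(images).argmax(dim=1)
                  for image, pred, label in zip(images, predictions, labels):
                      if pred.item() != label.item():        # recognition error (or a new food class)
                          retraining_samples.append((image, label))
          return retraining_samples                          # used to retrain and adjust the model parameters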
  • FIG. 9 is a block diagram of a refrigerator in accordance with an embodiment of the present invention. As shown in FIG. 9, the refrigerator includes an image capture device 10.
  • The image capture device 10 is configured to capture an image of the food materials in the refrigerator and transmit the image to a server, so that the server identifies the image according to the food material model to obtain the food material information, wherein the food material model is a neural network obtained by training with a deep learning algorithm.
  • the refrigerator 100 of the embodiment of the present invention sends an image of the collected food material to the server through the image capturing device 10, and provides a data foundation for the server to use the deep learning algorithm to identify the foodstuff in the refrigerator.
  • the image capture device 10 includes a camera module 11, a communication module 12, and a control module 13.
  • the camera module 11 is configured to collect an image of the foodstuff in the refrigerator;
  • the communication module 12 is configured to send an image of the foodstuff in the refrigerator to the server; and
  • the control module 13 is configured to respectively control the camera module 11 and the communication module 12.
  • The image capture device 10 further includes a lighting module 14 for illuminating the environment in which the food materials in the refrigerator 100 are located, so that the camera module 11 can capture an image of the food materials.
  • the control module 13 controls the camera module 11 to collect an image of the foodstuff in the refrigerator after receiving the door closing signal from the controller of the refrigerator 100, and controls the communication module 12 to send the image to the server 200.
  • A food material model can be deployed in the server 200.
  • The server 200 obtains an image of the food materials in the refrigerator and identifies the image according to the food material model to obtain the food material information, such as the category of the food materials and their coordinates, wherein the food material model is a neural network obtained by training with a deep learning algorithm.
  • The deep learning algorithm is introduced into the identification of the food in the refrigerator, and the mapping relationship between output and input can be fit better by increasing the number of neural network layers, thereby improving the recognition accuracy.
  • The server 200 of the embodiment of the present invention applies the deep learning algorithm to image recognition based on its powerful computing capability; the method is simple, the recognition rate is improved, and it can be better applied to identifying the large number of food materials in the refrigerator.
  • The food material model is obtained by training as follows: an image of the food in the refrigerator that is pre-acquired and calibrated is used as a training image, and the model parameters are determined based on the training image according to the deep learning algorithm to obtain the food material model.
  • The food material model may include one of a convolutional neural network, a recursive neural network, and a recurrent neural network, and the model parameters may be determined by the back-propagation (BP) algorithm.
  • Training of the food material model can be completed offline: images of the food in the refrigerator, or in a simulated refrigerator environment, are photographed and the positions of the various ingredients are manually marked; an input vector is obtained through the operations of the deep learning algorithm and is then fully connected with the pre-calibrated results; and the model parameters are determined by training on a large number of images, which is equivalent to determining the coefficients in a functional relationship. Once the parameter training is completed, the food material model is obtained and deployed on the server 200, so that when image recognition is performed, the server 200 can take the obtained image of the food materials in the refrigerator as input, identify the location and category of the ingredients according to the food material model, and provide the identification results, which can be fed back to the refrigerator or provided to the relevant personnel.
  • In summary, the embodiments of the present invention provide a food material identification system, a food material identification method, a food material model training method, a refrigerator, and a server, which apply a deep learning algorithm to food material identification, combining local photographing with online recognition and using the computing power of the server to perform the image recognition.
  • In the training stage of the food material model, a large number of food material images are collected and the ingredients in each image are marked, for example with a 4-point mark; the spatial coordinate information and the pixel information of the food materials are then processed to establish a deep neural network model for the various food materials.
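  • For illustration, one such annotation record could be represented as follows, assuming a simple 4-point (quadrilateral) region in pixel coordinates; the field names are chosen for illustration only:

      from dataclasses import dataclass
      from typing import List, Tuple

      @dataclass
      class FoodAnnotation:
          """One manually marked food item in a training image."""
          label: str                          # food material category
          points: List[Tuple[int, int]]       # 4 corner points (x, y) in pixel coordinates

      annotation = FoodAnnotation(
          label="apple",
          points=[(120, 80), (200, 80), (200, 160), (120, 160)],   # 4-point mark around the item
      )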
  • A feature mapping method is used to extract the various characteristic parameters of the image; that is, the features of the image are preserved while the redundancy is removed, greatly reducing the amount of computation.
  • In the identification stage, after the local image is uploaded, it is matched against the food material model on the server side to generate the food categories and their coordinates.
  • In the inspection stage, the foods and their positions in the images to be measured are manually calibrated and compared with the recognition results of the server to obtain the recognition rate, and the model parameters are adjusted through retraining to improve the actual performance.
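  • A minimal sketch of this inspection stage, computing a per-category recognition rate from (predicted, calibrated) label pairs so that poorly recognized foods can be sent back for retraining (the sample data are illustrative):

      from collections import defaultdict

      def recognition_rate(results):
          """results: list of (predicted_label, calibrated_label) pairs from the inspected images."""
          correct = defaultdict(int)
          total = defaultdict(int)
          for predicted, calibrated in results:
              total[calibrated] += 1
              correct[calibrated] += int(predicted == calibrated)
          return {label: correct[label] / total[label] for label in total}

      rates = recognition_rate([("apple", "apple"), ("egg", "milk"), ("milk", "milk")])
      needs_retraining = [label for label, rate in rates.items() if rate < 0.9]   # candidates for retraining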
  • Any process or method description in the flowcharts, or otherwise described herein, may be understood to represent one or more modules, segments or portions of code of executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
  • A "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Examples of computer-readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • Multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • If implemented in hardware, as in another embodiment, it can be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Combustion & Propulsion (AREA)
  • Mechanical Engineering (AREA)
  • Thermal Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)

Abstract

A food material identification system is provided. The food material identification system comprises a refrigerator and a server, the refrigerator comprising an image collection apparatus used to collect images of food materials in the refrigerator. The server acquires the images of the food materials in the refrigerator and identifies the images according to a food material model so as to obtain food material information, the food material model being a neural network obtained by training with a deep learning algorithm. The food material identification system has a high recognition rate and strong model generalization ability. Also provided are a food material identification method, a food material model training method, a refrigerator and a server.
PCT/CN2016/098120 2016-09-05 2016-09-05 Food material identification system and method, food material model training method, refrigerator and server WO2018040105A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/098120 WO2018040105A1 (fr) 2016-09-05 2016-09-05 Food material identification system and method, food material model training method, refrigerator and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/098120 WO2018040105A1 (fr) 2016-09-05 2016-09-05 Food material identification system and method, food material model training method, refrigerator and server

Publications (1)

Publication Number Publication Date
WO2018040105A1 true WO2018040105A1 (fr) 2018-03-08

Family

ID=61299729

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/098120 WO2018040105A1 (fr) 2016-09-05 2016-09-05 Food material identification system and method, food material model training method, refrigerator and server

Country Status (1)

Country Link
WO (1) WO2018040105A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084244A (zh) * 2019-03-14 2019-08-02 上海达显智能科技有限公司 Method for recognizing an object based on an image, smart device, and application
US20190384990A1 (en) * 2018-06-15 2019-12-19 Samsung Electronics Co., Ltd. Refrigerator, server and method of controlling thereof
CN111797719A (zh) * 2020-06-17 2020-10-20 武汉大学 Food component identification method
DE102020207371A1 (de) 2020-06-15 2021-12-16 BSH Hausgeräte GmbH Detection of stored goods in household storage devices
CN114422689A (zh) * 2021-12-03 2022-04-29 国网山西省电力公司超高压变电分公司 Edge-intelligence-based device and method for identifying the state of hard pressure plates
WO2023125491A1 (fr) * 2021-12-30 2023-07-06 青岛海尔电冰箱有限公司 Method for managing food having a specific contour, storage medium, and refrigerator
CN117975444A (zh) * 2024-03-28 2024-05-03 广东蛟龙电器有限公司 Food material image recognition method for a food crusher

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239433A (zh) * 2014-08-27 2014-12-24 百度在线网络技术(北京)有限公司 Network-connectable refrigerator and information recommendation method and system therefor
CN104482715A (zh) * 2014-12-31 2015-04-01 合肥华凌股份有限公司 Method for managing food in a refrigerator, and refrigerator
CN105512676A (zh) * 2015-11-30 2016-04-20 华南理工大学 Food recognition method on an intelligent terminal
US20160165113A1 (en) * 2013-04-23 2016-06-09 Lg Electronics Inc. Refrigerator And Control Method For The Same
CN105701507A (zh) * 2016-01-13 2016-06-22 吉林大学 Image classification method based on a convolutional neural network with dynamic stochastic pooling

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160165113A1 (en) * 2013-04-23 2016-06-09 Lg Electronics Inc. Refrigerator And Control Method For The Same
CN104239433A (zh) * 2014-08-27 2014-12-24 百度在线网络技术(北京)有限公司 Network-connectable refrigerator and information recommendation method and system therefor
CN104482715A (zh) * 2014-12-31 2015-04-01 合肥华凌股份有限公司 Method for managing food in a refrigerator, and refrigerator
CN105512676A (zh) * 2015-11-30 2016-04-20 华南理工大学 Food recognition method on an intelligent terminal
CN105701507A (zh) * 2016-01-13 2016-06-22 吉林大学 Image classification method based on a convolutional neural network with dynamic stochastic pooling

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190384990A1 (en) * 2018-06-15 2019-12-19 Samsung Electronics Co., Ltd. Refrigerator, server and method of controlling thereof
US11521391B2 (en) * 2018-06-15 2022-12-06 Samsung Electronics Co., Ltd. Refrigerator, server and method of controlling thereof
EP4254285A3 (fr) * 2018-06-15 2023-11-15 Samsung Electronics Co., Ltd. Refrigerator, server and method of controlling same
CN110084244A (zh) * 2019-03-14 2019-08-02 上海达显智能科技有限公司 Method for recognizing an object based on an image, smart device, and application
CN110084244B (zh) * 2019-03-14 2023-05-30 上海达显智能科技有限公司 Method for recognizing an object based on an image, smart device, and application
DE102020207371A1 (de) 2020-06-15 2021-12-16 BSH Hausgeräte GmbH Detection of stored goods in household storage devices
WO2021254740A1 (fr) 2020-06-15 2021-12-23 BSH Hausgeräte GmbH Identification of products stored in domestic storage devices
CN111797719A (zh) * 2020-06-17 2020-10-20 武汉大学 Food component identification method
CN111797719B (zh) * 2020-06-17 2022-09-02 武汉大学 Food component identification method
CN114422689A (zh) * 2021-12-03 2022-04-29 国网山西省电力公司超高压变电分公司 Edge-intelligence-based device and method for identifying the state of hard pressure plates
WO2023125491A1 (fr) * 2021-12-30 2023-07-06 青岛海尔电冰箱有限公司 Method for managing food having a specific contour, storage medium, and refrigerator
CN117975444A (zh) * 2024-03-28 2024-05-03 广东蛟龙电器有限公司 Food material image recognition method for a food crusher

Similar Documents

Publication Publication Date Title
WO2018040105A1 (fr) Food material identification system and method, food material model training method, refrigerator and server
US20210227126A1 (en) Deep learning inference systems and methods for imaging systems
US11869227B2 (en) Image recognition method, apparatus, and system and storage medium
US10322510B2 (en) Fine-grained object recognition in robotic systems
CN107798277A (zh) Food material identification system and method, food material model training method, refrigerator and server
US10599958B2 (en) Method and system for classifying an object-of-interest using an artificial neural network
US11741736B2 (en) Determining associations between objects and persons using machine learning models
US10929945B2 (en) Image capture devices featuring intelligent use of lightweight hardware-generated statistics
WO2018188453A1 (fr) Method for determining a human face region, storage medium, and computer device
WO2020125623A1 (fr) Living body detection method and device, storage medium, and electronic device
US20170124400A1 (en) Automatic video summarization
US20120274755A1 (en) System and method for human detection and counting using background modeling, hog and haar features
CN113330450A (zh) Method for identifying objects in an image
US20180137643A1 (en) Object detection method and system based on machine learning
JP2018175226A5 (fr)
CN112082999A (zh) Industrial product defect detection method and industrial smart camera
WO2019068931A1 (fr) Method and system for processing image data
KR102274581B1 (ko) Personalized HRTF generation method
US11748612B2 (en) Neural processing device and operation method thereof
KR20200009530A (ko) System and method for detecting abnormal objects
CN112329510A (zh) Cross-domain metric learning system and method
US20220346855A1 (en) Electronic device and method for smoke level estimation
WO2022183321A1 (fr) Detection method, apparatus, and electronic device
TWI766237B (zh) Maritime object ranging system
KR20230123226A (ko) Noise removal from surveillance camera video through AI-based object recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16914670

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16914670

Country of ref document: EP

Kind code of ref document: A1