CN110084244B - Method for identifying object based on image, intelligent device and application - Google Patents


Info

Publication number
CN110084244B
Authority
CN
China
Prior art keywords
loss
image
cavity
weight
image acquisition
Prior art date
Legal status
Active
Application number
CN201910193840.7A
Other languages
Chinese (zh)
Other versions
CN110084244A (en)
Inventor
娄军 (Lou Jun)
鹿鹏 (Lu Peng)
Current Assignee
Global AI & Display Co., Ltd.
Original Assignee
Global AI & Display Co., Ltd.
Priority date
Application filed by Global AI & Display Co., Ltd.
Priority to CN201910193840.7A
Publication of CN110084244A
Application granted
Publication of CN110084244B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Abstract

The method is applied to identifying an object in the cavity of a smart device. An image acquisition device is installed in the cavity of the smart device and is used to capture an image of the object to be identified there. The method comprises the following steps: capturing, with the image acquisition device, an image of the object to be identified in the cavity of the smart device; and inputting the image into a deep convolutional neural network model, which identifies the image to obtain the type, weight, quantity and/or position of the object to be identified. Because the type, weight, quantity and/or position of the object are identified by the image acquisition device combined with the deep convolutional neural network model, no weight sensor is needed and cost is reduced.

Description

Method for identifying object based on image, intelligent device and application
Technical Field
The invention relates to the technical field of image recognition, and in particular to a method for identifying an object based on an image, a smart device, and an application thereof.
Background
A conventional oven cannot distinguish the type, quantity, or weight of the food being baked, so it cannot actively control the heating temperature, heating time, and other parameters of the heating process; these must be set manually by the user. Given the diversity of food materials, users find it difficult to control the heating process precisely, which leads to baking failures. An existing solution adds a camera and uses computer vision to identify the type of food, then adds a weight sensor to obtain its weight. This solution is costly, and calibrating the weight sensor is complex.
Disclosure of Invention
The present application provides a method for identifying an object based on an image, a smart device, and an application, which make it possible to identify the weight of an object without a weight sensor, thereby solving the problems of high cost and complex calibration associated with weight sensors.
According to a first aspect, an embodiment provides a method for identifying an object based on an image. The method is applied to identifying an object in the cavity of a smart device; an image acquisition device is installed in the cavity of the smart device and is used to capture an image of the object to be identified there. The method comprises the following steps:
capturing, with the image acquisition device, an image of the object to be identified in the cavity of the smart device;
inputting the image into a deep convolutional neural network model, and identifying the image with the model to obtain the type, weight, quantity and/or position of the object to be identified.
In one embodiment, the method further comprises acquiring a training dataset and training the deep convolutional neural network model with it, where acquiring the training dataset comprises the steps of:
creating a simulated cavity whose size and shape are the same as those of the smart device cavity;
installing an image acquisition device in the simulated cavity, where its parameters, installation position, and angle are the same as those of the image acquisition device in the smart device cavity;
capturing images of objects in the simulated cavity with the image acquisition device installed there;
labeling the captured images with feature values, which serve as training data; the labeled feature values are the type, weight, quantity and/or position of the objects.
In one embodiment, the deep convolutional neural network model is a MobileNetV2 network structure, the labeled feature values are the object type, object weight, and quantity, and the loss function of the MobileNetV2 network structure is: Loss = loss(cls) + α·loss(wt) + β·loss(num), where loss(cls) is the loss function of the object type, α and β are balance parameters, loss(wt) is the loss function of the object weight, and loss(num) is the loss function of the object quantity.
In one embodiment, the deep convolutional neural network model is an SSD network structure, the labeled feature values are the object type, the object weight, and the position of the object within the simulated cavity, and the loss function of the SSD network structure is: Loss = loss(cls) + α·loss(loc) + β·loss(wt), where α and β are balance parameters, loss(cls) is the loss function of the object type, loss(loc) is the loss function of the object position, and loss(wt) is the loss function of the object weight.
In one embodiment, the deep convolutional neural network model is an SSD network structure, the labeled feature values are the object type and the object's circumscribed rectangular frame, and the SSD network structure identifies the weight of the object to be identified through the following steps:
computing a mapping table from image coordinates to the coordinates of the carrier that holds the object in the smart device cavity;
dividing the carrier into an M×N grid;
measuring the relation between the grid area covered by different objects, the shape of the covered cells, and the object weight, and compiling a weight relation table;
mapping the object's circumscribed rectangular frame to carrier coordinates according to the mapping table;
obtaining the object's weight from the object type, the mapped object coordinates, and the weight relation table.
In one embodiment, the deep convolutional neural network model is a Multi-view CNN network structure, the labeled feature values are the object type, object weight, and quantity, and the loss function of the Multi-view CNN network structure is: Loss = loss(cls) + α·loss(wt) + β·loss(num), where loss(cls) is the loss function of the object type, loss(wt) is the loss function of the object weight, loss(num) is the loss function of the object quantity, and α and β are balance parameters.
According to a second aspect, an embodiment provides a smart device that identifies objects based on images, comprising an image acquisition device and an identification device;
the image acquisition device is installed in the cavity of the smart device and is used to capture an image of the object to be identified in the cavity and feed the captured image to the identification device;
the identification device is configured with a deep convolutional neural network model, which identifies the image by the method above to obtain the type, weight, quantity and/or position of the object to be identified.
According to a third aspect, an embodiment provides a use of the method above in an oven. An image acquisition device is installed in the oven cavity and is used to capture an image of the object to be identified in the oven. The use comprises the steps of:
capturing an image of the food to be identified in the oven cavity with the image acquisition device;
inputting the image into a deep convolutional neural network model, and identifying the image with the model to obtain the type, weight, quantity and/or position of the food to be identified;
automatically controlling the oven's heating temperature and heating time for the food according to the obtained type, weight, quantity and/or position information, thereby automatically controlling the oven's heating process.
With the method for identifying an object based on an image disclosed in the embodiments above, the type, weight, quantity and/or position of an object are identified by combining an image acquisition device with a deep convolutional neural network model, which avoids the use of a weight sensor and reduces cost.
Drawings
FIG. 1 is a flow chart of object identification;
FIG. 2 is a schematic diagram of the Multi-view CNN model;
FIG. 3 is a flow chart of the oven's intelligent heating control.
Detailed Description
The invention is described in further detail below through specific embodiments with reference to the drawings.
Smart devices such as microwave ovens and steam ovens generally need to identify parameters such as the type and weight of food in order to heat it intelligently. Typically, the food's type is identified from an image while its weight is measured by a weight sensor; however, adding a weight sensor raises the cost of the appliance, and the sensor's accuracy is difficult to calibrate.
There is also an existing method that computes the area of the target food from its image and then derives the weight from that area and the food's density. The computation takes time, however, and appliances such as microwave ovens need to start heating promptly, so a method that spends time computing the weight is ill-suited to microwave ovens, ovens, combined steam-and-bake machines, and the like.
Since cameras are already installed in microwave ovens, combined steam-and-bake machines, and similar appliances, this application proposes combining the camera with a deep convolutional neural network model to directly identify the type, weight, quantity and/or position of the food, with high identification speed and high accuracy.
Embodiment one:
Without loss of generality, and based on the above conception, this embodiment provides a method for identifying objects based on images. The method is applied to identifying objects in the cavity of a smart device; an image acquisition device, specifically a camera, is installed in the cavity and captures images of the object to be identified there. The flow chart of the method is shown in FIG. 1, and it comprises the following steps (a minimal code sketch follows them).
S101: capturing an image of the object to be identified in the cavity of the smart device with the image acquisition device.
S102: inputting the image into a deep convolutional neural network model and identifying the image with the model to obtain the type, weight, quantity and/or position of the object to be identified.
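As an illustration of steps S101 and S102, the following sketch assumes a hypothetical camera interface, a preprocessing transform, and an already trained PyTorch model with three output heads; all of these names are illustrative, not from the patent:

```python
import torch

def identify_object(camera, model, transform, class_names):
    """S101 + S102: capture one frame and run the multi-head CNN on it."""
    image = camera.capture()                  # S101: frame from the in-cavity camera
    x = transform(image).unsqueeze(0)         # preprocess to a 1xCxHxW tensor
    model.eval()
    with torch.no_grad():
        cls_logits, weight, count = model(x)  # S102: type / weight / quantity heads
    return {
        "type": class_names[cls_logits.argmax(dim=1).item()],
        "weight_g": float(weight.item()),
        "quantity": int(round(count.item())),
    }
```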
Because the type, weight, quantity and/or position of the object are identified by the camera combined with the deep convolutional neural network model, the model must be trained before it can be used for identification, so training data must first be collected.
Taking food as the example object: there are many types of food; the food's size in the captured image varies with the camera's distance from it; the interior layout differs across device types and series; and the camera's installation position differs across devices. To resolve these differences and collect accurate training data, this example acquires its training dataset according to the following basic scheme:
creating a simulated cavity that mirrors the actual product, i.e., whose size and shape are the same as those of the smart device cavity in actual production;
installing a camera in the simulated cavity with the same installation position and angle as the camera in the actual smart device cavity, and with the same parameters, such as resolution and FOV (field of view);
capturing images of objects in the simulated cavity with the image acquisition device installed there;
labeling the captured images with feature values, which serve as training data; the labeled feature values are the type, weight, quantity and/or position of the objects.
With this design, the simulated cavity mirrors the actual smart device cavity, so the training data it yields can be used to train the deep convolutional neural network model of the corresponding actual device. In other words, each type of smart device is matched with its own deep convolutional neural network model and training dataset, so that in practical use the trained model can rapidly identify the type, weight, quantity and/or position of the food in the images captured by the camera. The weight of the food is thus identified by the camera combined with the deep convolutional neural network model, with no need for a weight sensor.
It should be noted that, to make the collected training dataset accurate, this example also installs several auxiliary cameras in the simulated cavity when collecting data. Their installation positions differ from that of the original camera, and images are captured with all cameras. Each auxiliary camera is positioned so that the whole object to be identified is visible, and together they capture the random placements, quantities, and varying shapes of the objects. The data captured jointly by the auxiliary cameras and the original camera in the simulated cavity therefore characterize the different features of the object to be identified, improving the completeness of the training data.
In addition, to improve collection efficiency and widen the capture viewpoints, a dedicated rotating device can be used to rotate the object to be identified at random.
The following examples provide several deep convolutional neural network models, though the method is not limited to these types.
First type: the deep convolutional neural network model is a MobileNetV2 network structure.
For the MobileNetV2 network structure, the labeled feature values are the type, weight, and quantity of the objects; that is, the MobileNetV2 network structure identifies the type, weight, and quantity of the objects from the images captured by the camera.
In this example, the loss function of the MobileNetV2 network structure is: Loss = loss(cls) + α·loss(wt) + β·loss(num), where loss(cls) is the loss function of the object type; preferably, loss(cls) uses a cross entropy loss or a binary cross entropy loss. α and β are balance parameters; setting α or β to 0 means the weight or quantity factor is not considered. loss(wt) is the loss function of the object weight and loss(num) is the loss function of the object quantity; this example uses a smooth L1 loss for both.
Model parameters are updated by back-propagating the gradient of the Loss, and different loss terms make the model focus on learning particular aspects of the data, so the Loss guides the network's optimization. By combining the losses for object type, weight, and quantity, the model can better learn the relations among the three and keep them balanced.
Preferably, when the object is food, because the relative error in weight has a greater effect on the oven's heating time, the smooth L1 loss used for loss(wt) in this example is computed on the ratio of the difference between the predicted and actual values to the actual value, loss = (wt_pred - wt_targ) / wt_targ, rather than directly on the difference, loss = wt_pred - wt_targ.
This method suits a dataset labeled with the types, quantities, and weights of food materials; the type, quantity, and weight can then be obtained directly from the deep convolutional neural network.
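A minimal PyTorch sketch of this combined loss, assuming the model outputs class logits plus scalar weight and quantity predictions (an illustration of the formula above, not the patent's own code):

```python
import torch
import torch.nn.functional as F

def combined_loss(cls_logits, wt_pred, num_pred,
                  cls_targ, wt_targ, num_targ, alpha=1.0, beta=1.0):
    # loss(cls): cross entropy over the food classes
    loss_cls = F.cross_entropy(cls_logits, cls_targ)
    # loss(wt): smooth L1 on the *relative* weight error, as preferred above,
    # since relative error matters more for heating time than absolute error
    rel_err = (wt_pred - wt_targ) / wt_targ
    loss_wt = F.smooth_l1_loss(rel_err, torch.zeros_like(rel_err))
    # loss(num): smooth L1 on the predicted object count
    loss_num = F.smooth_l1_loss(num_pred, num_targ.float())
    # Loss = loss(cls) + alpha*loss(wt) + beta*loss(num);
    # alpha = 0 or beta = 0 drops the weight or quantity term
    return loss_cls + alpha * loss_wt + beta * loss_num
```

The SSD and Multi-view CNN losses described below compose in the same way, swapping loss(loc) in for loss(num) or reusing the same terms.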
Second type: SSD network structure.
For this SSD network structure, the labeled feature values are the object type, the object weight, and the object's position within the simulated cavity.
The loss function of this SSD network structure is: Loss = loss(cls) + α·loss(loc) + β·loss(wt), where α and β are balance parameters; loss(cls) is the loss function of the object type, for which this example preferably uses a cross entropy loss or a binary cross entropy loss; loss(loc) is the loss function of the object position, for which the preferred example uses a smooth L1 loss; and loss(wt) is the loss function of the object weight, for which the preferred example uses the optimized (relative-error) smooth L1 loss described above.
Adding loss(loc) lets the model pay more attention to the food materials in the image, improving the accuracy of the identified type and weight, although labeling costs more than for the first type. This method suits a dataset labeled with the types, weights, and circumscribed rectangular frames of food materials; the type, weight, size, and position can then be obtained directly from the deep convolutional neural network.
Third type: SSD network structure.
For this SSD network structure, the labeled feature values are the object type and the object's circumscribed rectangular frame. This variant suits data that are not directly labeled with weight but are labeled with type and circumscribed rectangular frame.
The object type and circumscribed rectangular frame are obtained directly from the deep convolutional neural network, and the object's weight is then estimated using the object's density. The specific steps are as follows:
computing a mapping table from image coordinates to the coordinates of the carrier that holds the object in the smart device cavity;
dividing the carrier into an M×N grid;
measuring the relation between the grid area covered by different objects, the shape of the covered cells, and the object weight, and compiling a weight relation table;
mapping the object's circumscribed rectangular frame to carrier coordinates according to the mapping table;
obtaining the object's weight from the object type, the mapped object coordinates, and the weight relation table.
Taking food as the object and an oven as the smart device, the food weight is estimated as follows (a code sketch follows the steps):
1. Compute a mapping table from oven image coordinates to oven baking tray coordinates.
2. Divide the oven's baking tray into an M×N grid.
3. Measure the relation between the grid area covered by different food materials, the shape of the covered cells, and the food weight, and compile a weight relation table.
4. Map the food material's circumscribed rectangular frame to baking tray coordinates according to the mapping table.
5. Estimate the food material's weight from its type, the mapped coordinates, and the weight relation table.
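A sketch of steps 4 and 5 under stated assumptions: the image-to-tray mapping of step 1 is approximated by a 3×3 homography H, and the weight relation table of step 3 is reduced to a per-cell weight per food type measured offline. All names and the simplified table are illustrative:

```python
import numpy as np
import cv2

def estimate_weight(box_xyxy, food_type, H, tray_wh, grid_mn, gram_per_cell):
    """Map a circumscribed rectangle to tray coordinates and look up the weight."""
    x1, y1, x2, y2 = box_xyxy
    corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
    tray_pts = cv2.perspectiveTransform(corners, H).reshape(-1, 2)  # step 4
    M, N = grid_mn
    cell_w, cell_h = tray_wh[0] / M, tray_wh[1] / N
    # Count the grid cells covered by the mapped rectangle's extent on the tray
    cols = int(np.ceil(tray_pts[:, 0].max() / cell_w)) - int(np.floor(tray_pts[:, 0].min() / cell_w))
    rows = int(np.ceil(tray_pts[:, 1].max() / cell_h)) - int(np.floor(tray_pts[:, 1].min() / cell_h))
    covered_cells = max(cols, 1) * max(rows, 1)
    return covered_cells * gram_per_cell[food_type]                 # step 5

# e.g., with a 10x12 grid on a 300x400 mm tray:
# estimate_weight((120, 80, 360, 240), "chicken_wing", H,
#                 tray_wh=(300, 400), grid_mn=(10, 12),
#                 gram_per_cell={"chicken_wing": 14.0})
```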
Fourth type: multi-view CNN network architecture.
For the Multi-view CNN network structure, the labeled feature values are the object type, object weight, and quantity.
The loss function of the Multi-view CNN network structure is: Loss = loss(cls) + α·loss(wt) + β·loss(num), where loss(cls) is the loss function of the object type; preferably, this example uses a cross entropy loss or a binary cross entropy loss. loss(wt) is the loss function of the object weight and loss(num) is the loss function of the object quantity; the preferred embodiment uses a smooth L1 loss for both. α and β are balance parameters; setting α or β to 0 means the weight or quantity factor is not considered.
Preferably, the training procedure for the Multi-view CNN model in this example is as follows (a sketch follows these steps); the schematic diagram is shown in FIG. 2:
1. Train a separate CNN model (CNN_1 to CNN_n) for each view; each model can independently identify the type, quantity, and weight of food materials. Here n is the number of CNN models, which is also the number of views and the number of cameras.
For example, with two cameras, two separate CNN models must be trained.
2. Fix CNN_1 to CNN_n unchanged, fuse a chosen layer of feature maps from the n trained models into a new feature map, use the new feature map as the input of CNN_{n+1}, and then train CNN_{n+1} to obtain the trained Multi-view CNN model. Inputting a newly captured object image into the Multi-view CNN model yields information such as the object's type, weight, and position.
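A sketch of the fusion stage in PyTorch, assuming each per-view backbone CNN_i outputs a feature map of the same spatial size and the fusion head CNN_{n+1} maps the concatenated features to the final predictions; the module shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiViewCNN(nn.Module):
    """Step 2: freeze CNN_1..CNN_n and train only the fusion head CNN_{n+1}."""
    def __init__(self, view_backbones, fusion_head):
        super().__init__()
        self.backbones = nn.ModuleList(view_backbones)
        for p in self.backbones.parameters():
            p.requires_grad = False          # CNN_1..CNN_n stay fixed
        self.fusion_head = fusion_head       # CNN_{n+1}, the only part trained

    def forward(self, views):                # views: one image tensor per camera
        feats = [b(v) for b, v in zip(self.backbones, views)]
        fused = torch.cat(feats, dim=1)      # fuse feature maps channel-wise
        return self.fusion_head(fused)       # -> type / weight / quantity outputs
```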
With this method, the type, weight, quantity and/or position of the object to be identified are obtained by identifying the image with a deep convolutional neural network model; no weight sensor is needed, and cost is reduced.
Based on this method, the following example also provides a smart device that identifies objects based on images. The smart device can be a microwave oven, an oven, a steam oven, a combined steam-and-bake machine, or the like, and comprises an image acquisition device and an identification device;
the image acquisition device and the identification device can be connected wirelessly or by wire, the image acquisition device being installed in the cavity of the smart device and used to capture images of the object to be identified in the cavity and feed the captured images to the identification device;
the identification device is configured with a deep convolutional neural network model, which identifies the image by the method above to obtain the type, weight, quantity and/or position of the object to be identified.
Embodiment two:
based on the first embodiment, the present embodiment provides an application of the identification method in the first embodiment in an oven, wherein an image acquisition device is installed in a cavity of the oven, and the image acquisition device is used for acquiring an image of food to be identified in the oven, and a flowchart of the image acquisition device is shown in fig. 3, and the method comprises the steps of:
capturing an image of the food to be identified in the oven cavity with the image acquisition device;
inputting the image into a deep convolutional neural network model, and identifying the image with the model to obtain the type, weight, quantity and/or position of the food to be identified;
automatically controlling the oven's heating temperature and heating time for the food according to the obtained type, weight, quantity and/or position information, thereby automatically controlling the oven's heating process.
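As an illustration of the control step, the sketch below maps the recognized food type and total weight to a heating program through a lookup table; the foods, temperatures, and time curves are placeholders, not values from the patent:

```python
# Hypothetical heating programs: temperature in degrees Celsius plus a time
# curve in minutes as a function of total weight in grams (placeholder values).
HEAT_PROGRAMS = {
    "chicken_wing": (200, lambda g: 12 + 0.04 * g),
    "toast":        (180, lambda g: 4 + 0.02 * g),
}

def plan_heating(food_type, weight_g, quantity):
    """Turn the model's (type, weight, quantity) output into oven settings."""
    temp_c, time_fn = HEAT_PROGRAMS[food_type]
    minutes = time_fn(weight_g * quantity)
    return temp_c, minutes

print(plan_heating("chicken_wing", 80, 6))   # -> approximately (200, 31.2)
```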
In this method, a camera is added to the oven to obtain images of the food inside; the deep convolutional neural network model identifies the type, weight, quantity, and position of the food to be heated or being heated; and the heating process is then controlled intelligently according to the identification result. The invention simplifies the user's operation, increases the baking success rate, and improves the user experience. Compared with existing solutions, it reduces cost and can be used for a long time without calibration.
Based on this application, the embodiment also provides a smart oven comprising an image acquisition device, an identification device, and a control device;
the image acquisition device is installed in the oven cavity and is used to capture images of the object to be identified in the cavity and feed the captured images to the identification device;
the identification device is configured with a deep convolutional neural network model, which identifies the image by the method of embodiment one to obtain the type, weight, quantity and/or position of the food to be identified;
the control device automatically controls the heating temperature and heating time of the oven's heating element according to the obtained food type, weight, quantity and/or position information, thereby automatically controlling the oven's heating process.
The foregoing description of the invention is presented for purposes of illustration and is not intended to be limiting. A person skilled in the art may also make simple deductions, modifications, or substitutions based on the idea of the invention.

Claims (6)

1. A method for identifying an object based on an image, characterized in that the method is applied to identifying an object in a cavity of a smart device, an image acquisition device is installed in the cavity of the smart device and is used to capture an image of the object to be identified there, and the method comprises the following steps:
capturing, with the image acquisition device, an image of the object to be identified in the cavity of the smart device;
inputting the image into a deep convolutional neural network model, and identifying the image with the model to obtain the type, weight, quantity and/or position of the object to be identified;
the method further comprising acquiring a training dataset and training the deep convolutional neural network model with it, where acquiring the training dataset comprises the steps of:
creating a simulated cavity whose size and shape are the same as those of the smart device cavity;
installing an image acquisition device in the simulated cavity, where its parameters, installation position, and angle are the same as those of the image acquisition device in the smart device cavity;
capturing images of objects in the simulated cavity with the image acquisition device installed there;
labeling the captured images with feature values, which serve as training data, the labeled feature values being the type, weight, quantity and/or position of the objects;
wherein the deep convolutional neural network model is an SSD network structure, the labeled feature values are the object type and the object's circumscribed rectangular frame, and the SSD network structure identifies the weight of the object to be identified through the following steps:
computing a mapping table from image coordinates to the coordinates of the carrier that holds the object in the smart device cavity;
dividing the carrier into an M×N grid;
measuring the relation between the grid area covered by different objects, the shape of the covered cells, and the object weight, and compiling a weight relation table;
mapping the object's circumscribed rectangular frame to carrier coordinates according to the mapping table;
obtaining the object's weight from the object type, the mapped object coordinates, and the weight relation table.
2. The method of claim 1, characterized in that the deep convolutional neural network model is a MobileNetV2 network structure, the labeled feature values are the object type, object weight, and quantity, and the loss function of the MobileNetV2 network structure is: Loss = loss(cls) + α·loss(wt) + β·loss(num), where loss(cls) is the loss function of the object type, α and β are balance parameters, loss(wt) is the loss function of the object weight, and loss(num) is the loss function of the object quantity.
3. The method of claim 1, characterized in that the deep convolutional neural network model is an SSD network structure, the labeled feature values are the object type, object weight, and position of the object within the simulated cavity, and the loss function of the SSD network structure is: Loss = loss(cls) + α·loss(loc) + β·loss(wt), where α and β are balance parameters, loss(cls) is the loss function of the object type, loss(loc) is the loss function of the object position, and loss(wt) is the loss function of the object weight.
4. The method of claim 1, characterized in that the deep convolutional neural network model is a Multi-view CNN network structure, the labeled feature values are the object type, object weight, and quantity, and the loss function of the Multi-view CNN network structure is: Loss = loss(cls) + α·loss(wt) + β·loss(num), where loss(cls) is the loss function of the object type, loss(wt) is the loss function of the object weight, loss(num) is the loss function of the object quantity, and α and β are balance parameters.
5. A smart device for identifying an object based on an image, characterized by comprising an image acquisition device and an identification device;
the image acquisition device is installed in the cavity of the smart device and is used to capture an image of the object to be identified in the cavity and feed the captured image to the identification device;
the identification device is configured with a deep convolutional neural network model, which identifies the image by the method of any one of claims 1-4 to obtain the type, weight, quantity and/or position of the object to be identified.
6. An oven applying the method of any one of claims 1-4, characterized in that an image acquisition device is installed in the oven cavity and is used to capture images of the food to be identified in the oven, the application comprising the steps of:
capturing an image of the food to be identified in the oven cavity with the image acquisition device;
inputting the image into a deep convolutional neural network model, and identifying the image with the model to obtain the type, weight, quantity and/or position of the food to be identified;
automatically controlling the oven's heating temperature and heating time for the food according to the obtained type, weight, quantity and/or position information, thereby automatically controlling the oven's heating process.
CN201910193840.7A 2019-03-14 2019-03-14 Method for identifying object based on image, intelligent device and application Active CN110084244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910193840.7A CN110084244B (en) 2019-03-14 2019-03-14 Method for identifying object based on image, intelligent device and application


Publications (2)

Publication Number Publication Date
CN110084244A CN110084244A (en) 2019-08-02
CN110084244B true CN110084244B (en) 2023-05-30






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant