CN116579969A - Identification method and device for food material image, steaming and baking equipment and storage medium - Google Patents


Info

Publication number: CN116579969A
Application number: CN202210105044.5A
Authority: CN (China)
Prior art keywords: image, sub-image, pasting, gelatinized
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 李玉强, 吕守鹏
Current assignee: Qingdao Haier Smart Technology R&D Co Ltd; Haier Smart Home Co Ltd
Original assignee: Qingdao Haier Smart Technology R&D Co Ltd; Haier Smart Home Co Ltd
Application filed by Qingdao Haier Smart Technology R&D Co Ltd and Haier Smart Home Co Ltd
Priority to CN202210105044.5A
Publication of CN116579969A

Classifications

    • G06T7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06N3/08 Neural networks; learning methods
    • G06T7/11 Region-based segmentation
    • G06T7/12 Edge-based segmentation
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/20081 Training; learning
    • G06T2207/30128 Food products (industrial image inspection)
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The application relates to the technical field of intelligent household appliances and discloses a method for identifying food material images. The method comprises: acquiring image information collected by a camera configured on steaming and baking equipment; performing image segmentation on the image information to obtain a plurality of sub-images; performing feature extraction on each of the sub-images through an image classification network model to generate gelatinization feature information associated with the image information; and determining the gelatinization condition of the image corresponding to the image information according to the gelatinization feature information. The method improves the real-time performance of food material image gelatinization identification. The application also discloses a device for identifying food material images, steaming and baking equipment, and a storage medium.

Description

Identification method and device for food material image, steaming and baking equipment and storage medium
Technical Field
The application relates to the technical field of intelligent household appliances, and in particular to a method and device for identifying food material images, steaming and baking equipment, and a storage medium.
Background
At present, with the rapid development of science and technology, users are using intelligent household appliances more and more frequently. Taking intelligent kitchen appliances as an example, after an oven has been in use for a long time, the gelatinization condition of the food currently being baked needs to be fed back to the user so that the user knows whether the baked food is gelatinized. Higher demands are therefore placed on the accurate identification of gelatinization conditions by steaming and baking equipment.
In the existing gelatinization identification approach, food material image information is collected at preset time intervals and then input into a convolutional neural network model for model training, and the gelatinization degree of the image is determined from the model training result. During pizza baking, for example, the pixel values at different locations of the pizza change as the baking time increases; the image of the current baking stage is therefore fitted against the gelatinization degree to obtain a fitting curve of pixel value versus gelatinization degree, and the gelatinization value is determined from this curve.
In the process of implementing the embodiments of the present disclosure, it is found that at least the following problems exist in the related art:
the object of model training is the whole food material image information, so the computational load of model training is large and gelatinization identification cannot be performed in real time.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, and is intended to neither identify key/critical elements nor delineate the scope of such embodiments, but is intended as a prelude to the more detailed description that follows.
The embodiments of the disclosure provide a method and device for identifying food material images, steaming and baking equipment, and a storage medium, so as to improve the real-time performance of food material image gelatinization identification.
In some embodiments, the method comprises: acquiring image information collected by a camera configured on steaming and baking equipment; performing image segmentation on the image information to obtain a plurality of sub-images; performing feature extraction on each of the sub-images through an image classification network model to generate gelatinization feature information associated with the image information; and determining the gelatinization condition of the image corresponding to the image information according to the gelatinization feature information.
In some embodiments, the apparatus comprises: a processor and a memory storing program instructions, the processor being configured to perform the identification method for food material images as described above when the program instructions are run.
In some embodiments, the steaming and baking apparatus comprises identification means for food material images as described above.
In some embodiments, the storage medium stores program instructions which, when executed, perform the identification method for food material images described above.
The identification method, the device, the steaming and baking equipment and the storage medium for the food material image provided by the embodiment of the disclosure can realize the following technical effects:
the steaming and baking equipment divides the image information acquired by the camera to form a plurality of sub-images, and respectively performs feature extraction on each sub-image through the image classification network model to generate pasting feature information associated with the image information, so that the complexity of network training of the image classification network model is reduced, and the real-time performance of food image pasting identification is improved.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
Fig. 1 is a schematic diagram of a method for identifying food material images according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of another method for identifying food material images according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of another method for identifying food material images according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of an application according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of another application according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of another method for identifying food material images according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of another method for identifying food material images according to an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of another method for identifying food material images according to an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of a device for identifying food material images according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of another device for identifying food material images according to an embodiment of the present disclosure.
Detailed Description
So that the features and technical content of the disclosed embodiments can be understood in more detail, the embodiments of the disclosure are described below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments of the disclosure. In the following description, numerous details are set forth for purposes of explanation in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may still be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawings.
The terms "first", "second", and the like in the description and claims of the embodiments of the disclosure and in the above-described figures are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate in describing the embodiments of the present disclosure. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion.
The term "plurality" means two or more, unless otherwise indicated.
In the embodiments of the present disclosure, the character "/" indicates that the front and rear objects are in an "or" relationship. For example, A/B represents: A or B.
The term "and/or" describes an association between objects and indicates that three relationships may exist. For example, A and/or B represents: A, or B, or both A and B.
The term "corresponding" may refer to an association or binding relationship; a correspondence between A and B refers to an association or binding relationship between A and B.
In the embodiments of the present disclosure, an intelligent home appliance refers to a home appliance formed after a microprocessor, sensor technology, and network communication technology are introduced into the appliance. It has the characteristics of intelligent control, intelligent sensing, and intelligent application, and its operation often depends on the application and processing of modern technologies such as the Internet of Things, the Internet, and electronic chips. For example, an intelligent home appliance may connect to an electronic device so that a user can remotely control and manage the appliance.
In the embodiments of the present disclosure, a terminal device refers to an electronic device with a wireless connection function. The terminal device may be communicatively connected to the intelligent home appliance through the Internet, or through Bluetooth, Wi-Fi, or other means. In some embodiments, the terminal device is, for example, a mobile device, a computer, an in-vehicle device, or any combination thereof. The mobile device may include, for example, a mobile phone, a smart home device, a wearable device, a smart mobile device, a virtual reality device, or any combination thereof; the wearable device includes, for example, a smart watch, a smart bracelet, or a pedometer.
Referring to fig. 1, an embodiment of the present disclosure provides a method for identifying an image of a food material, including:
S01, the steaming and baking equipment acquires image information collected by a camera configured on the steaming and baking equipment.
S02, the steaming and baking equipment performs image segmentation on the image information to obtain a plurality of sub-images.
S03, the steaming and baking equipment performs feature extraction on each of the plurality of sub-images through an image classification network model and generates gelatinization feature information associated with the image information.
S04, the steaming and baking equipment determines the gelatinization condition of the image corresponding to the image information according to the gelatinization feature information.
With the method for identifying food material images provided by the embodiments of the present disclosure, the steaming and baking equipment segments the image information collected by the camera into a plurality of sub-images, performs feature extraction on each sub-image through the image classification network model, and generates gelatinization feature information associated with the image information. This reduces the complexity of network training for the image classification network model and improves the real-time performance of food material image gelatinization identification. In addition, the method ensures the accuracy of the gelatinization-degree judgment without sacrificing network training speed.
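As a concrete illustration of steps S01 to S04, the following sketch shows the overall flow of segmenting a frame and judging its gelatinization condition. The 4 x 4 grid, the placeholder feature extractor (mean intensity standing in for the image classification network model), and the threshold are all assumptions for illustration; the patent does not specify them.

```python
import numpy as np

def split_image(image, grid=4):
    """Split an H x W x C image into grid*grid equally sized sub-images (S02)."""
    h, w = image.shape[0] // grid, image.shape[1] // grid
    return [image[r*h:(r+1)*h, c*w:(c+1)*w]
            for r in range(grid) for c in range(grid)]

def extract_feature(sub_image):
    """Placeholder for the image classification network model (S03)."""
    return float(sub_image.mean())

def identify(image, threshold=0.8, grid=4):
    """Judge the gelatinization condition (S04) from per-sub-image features."""
    features = [extract_feature(s) for s in split_image(image, grid)]
    return max(features) >= threshold

frame = np.zeros((64, 64, 3))   # stand-in for a camera frame (S01)
frame[0:16, 0:16] = 1.0         # one bright (gelatinized) sub-region
print(identify(frame))          # the bright sub-image trips the threshold
```

Processing sixteen small sub-images instead of the full frame is what keeps the per-step cost low enough for real-time use.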
Optionally, referring to fig. 2, the steaming and baking equipment performs feature extraction on each of the plurality of sub-images through the image classification network model and generates gelatinization feature information associated with the image information, including:
S11, the steaming and baking equipment inputs each sub-image into an attention network model for gelatinization feature extraction to obtain a first gelatinization feature sub-image corresponding to each sub-image.
In this step, gelatinization feature extraction is performed by the attention network model while the size of each sub-image is maintained, so that the weight of gelatinization features is increased and the weight of invalid features is reduced. An invalid feature refers to an image feature of a non-gelatinized region.
S12, the steaming and baking equipment classifies the first gelatinization feature sub-image corresponding to each sub-image using the KNN algorithm to generate a second gelatinization feature sub-image corresponding to each sub-image.
In this step, classifying the first gelatinization feature sub-image with the KNN algorithm to generate the second gelatinization feature sub-image increases the number of pixel channels of each pixel, so that a more accurate feature sub-image is obtained on the basis of the first gelatinization feature sub-image after gelatinization feature extraction.
S13, the steaming and baking equipment inputs the second gelatinization feature sub-image corresponding to each sub-image into N convolution layers for network training to obtain gelatinization feature information associated with the image information.
In this step, since the object of network training is the second gelatinization feature sub-images corresponding to the segmented sub-images, the computational load of network model training is reduced and the real-time performance of gelatinization identification is improved. Meanwhile, training the second gelatinization feature sub-image of each sub-image through the N convolution layers allows the gelatinization feature information associated with the image information to be obtained accurately, improving the accuracy of the gelatinization feature information.
Thus, both the real-time performance of gelatinization identification and the accuracy of the gelatinization feature information can be improved.
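The patent does not fix the attention architecture. The following is a minimal sketch of a size-preserving spatial attention gate of the kind step S11 describes, with a hypothetical precomputed weight map standing in for the learned attention weights.

```python
import numpy as np

def spatial_attention(sub_image, weight_logits):
    """Re-weight pixels while keeping the sub-image size (S11).
    weight_logits is an H x W map; a sigmoid turns it into per-pixel
    weights in (0, 1) that raise gelatinization features and suppress
    invalid (non-gelatinized) features."""
    weights = 1.0 / (1.0 + np.exp(-weight_logits))  # sigmoid gate
    return sub_image * weights[..., None]           # same H x W x C shape

sub = np.ones((16, 16, 3))
logits = np.full((16, 16), 5.0)   # hypothetical logits favoring every pixel
out = spatial_attention(sub, logits)
print(out.shape)                  # size is preserved
```

Because the gate only rescales pixels, the first gelatinization feature sub-image has exactly the same dimensions as the input sub-image, as the step requires.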
Referring to fig. 3, the steaming and baking equipment classifies the first gelatinization feature sub-image corresponding to each sub-image using the KNN algorithm to generate the second gelatinization feature sub-image corresponding to each sub-image, including:
S21, the steaming and baking equipment extracts the pixels of each first gelatinization feature sub-image and the adjacent pixels associated with each pixel.
S22, the steaming and baking equipment performs channel splicing on each pixel and its adjacent pixels to generate the second gelatinization feature sub-image.
In this step, the adjacent pixels of a pixel may be determined either by taking the pixels whose distance values from the pixel are less than or equal to a preset distance value as the adjacent pixels, or by sorting the distance values from the pixel in ascending order and taking the pixels corresponding to the first S distance values as the adjacent pixels, where S is greater than or equal to 3 and an upper limit for S may be preset.
Thus, the number of pixel channels of each pixel is increased, improving the accuracy of the acquired second gelatinization feature sub-image.
As an example, the pixel feature of the i-th pixel of any first gelatinization feature sub-image F is denoted F_i = (x_i1, x_i2, x_i3, ..., x_ip), where p represents the dimension of the pixel feature. As shown in fig. 5, F has size H x W x C with C = 3; that is, F has H x W elements, each element being a one-dimensional vector of dimension 3, and these H x W elements together constitute the first gelatinization feature sub-image F of size H x W x C.
First, for the first gelatinization feature sub-image F, the steaming and baking equipment selects the 3 pixels with the smallest distance values as the adjacent pixels of the i-th pixel F_i = (x_i1, x_i2, x_i3, ..., x_ip) of F. The pixel features of the adjacent pixels are denoted F_j = (x_j1, x_j2, x_j3, ..., x_jp), where j = 1, 2, 3.
Then, the steaming and baking equipment performs channel splicing on the i-th pixel and its adjacent pixels, i.e., merges the pixel features of the i-th pixel and of each adjacent pixel along the vector dimension to generate the spliced feature vector corresponding to the i-th pixel:
F_i,concat = (x_i1, x_i2, x_i3, ..., x_ip, x_11, x_12, x_13, ..., x_1p, x_21, x_22, x_23, ..., x_2p, x_31, x_32, x_33, ..., x_3p).
Finally, the F_i,concat vectors corresponding to all pixels are combined to generate the second gelatinization feature sub-image, which has size H x W x P, where P represents the dimension of each element of the second gelatinization feature sub-image and P = C x (S + 1).
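The channel splicing above can be sketched as follows. The Manhattan distance metric and the toy 4 x 4 sub-image are assumptions for illustration; the patent only requires that each pixel be spliced with its S nearest pixels.

```python
import numpy as np

def knn_channel_splice(F, S=3):
    """For each pixel of a first gelatinization feature sub-image F (H x W x C),
    concatenate its feature with the features of its S spatially nearest
    pixels (S21-S22), giving a second sub-image of shape H x W x C*(S+1)."""
    H, W, C = F.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    flat = np.stack([ys.ravel(), xs.ravel()], axis=1)           # (H*W, 2) coords
    out = np.empty((H, W, C * (S + 1)), dtype=F.dtype)
    for r, c in flat:
        dist = np.abs(flat[:, 0] - r) + np.abs(flat[:, 1] - c)  # Manhattan distance
        nearest = np.argsort(dist, kind="stable")[1:S + 1]      # skip the pixel itself
        parts = [F[r, c]] + [F[q // W, q % W] for q in nearest]
        out[r, c] = np.concatenate(parts)                       # channel splicing
    return out

F1 = np.random.rand(4, 4, 3)      # toy first gelatinization feature sub-image
F2 = knn_channel_splice(F1, S=3)
print(F2.shape)                   # H x W x C*(S+1)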
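placeholder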
Optionally, referring to fig. 6, the steaming and baking equipment performs feature extraction on each of the plurality of sub-images through the image classification network model and generates gelatinization feature information associated with the image information, including:
S31, the steaming and baking equipment inputs each sub-image into the attention network model for gelatinization feature extraction to obtain a first gelatinization feature sub-image corresponding to each sub-image.
In this step, the attention network model may follow existing attention network architectures; details are not repeated in the embodiments of the present disclosure.
S32, the steaming and baking equipment classifies the first gelatinization feature sub-image corresponding to each sub-image using the KNN algorithm to generate a second gelatinization feature sub-image corresponding to each sub-image.
S33, the steaming and baking equipment inputs the second gelatinization feature sub-image corresponding to each sub-image into the N convolution layers for network training.
S34, the steaming and baking equipment performs dimension conversion on each network-trained second gelatinization feature sub-image to generate the gelatinization feature vector corresponding to each sub-image.
In this step, the dimension conversion converts each second gelatinization feature sub-image into a one-dimensional vector, which reduces the computational complexity.
S35, the steaming and baking equipment merges the gelatinization feature vectors corresponding to the sub-images to generate the gelatinization feature information.
In this way, the computational complexity is reduced.
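Steps S34 and S35 can be sketched as follows; the sub-image sizes (16 trained sub-images of shape 2 x 2 x 12) are hypothetical stand-ins for the convolution outputs.

```python
import numpy as np

def to_feature_vector(sub):
    """Dimension conversion (S34): flatten a trained sub-image to a 1-D vector."""
    return sub.ravel()

def merge_features(sub_images):
    """Merging (S35): concatenate the per-sub-image vectors into one
    gelatinization feature vector for the whole image."""
    return np.concatenate([to_feature_vector(s) for s in sub_images])

subs = [np.random.rand(2, 2, 12) for _ in range(16)]  # 16 trained sub-images
info = merge_features(subs)
print(info.shape)                                     # 16 * 2 * 2 * 12 elements
```

Working with one flat vector instead of sixteen small tensors is what simplifies the normalization and selection steps that follow.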
Optionally, referring to fig. 7, the steaming and baking equipment merges the gelatinization feature vectors corresponding to the sub-images to generate the gelatinization feature information, including:
S41, the steaming and baking equipment merges the gelatinization feature vectors corresponding to the sub-images to obtain the gelatinization feature vector of the image corresponding to the image information.
S42, the steaming and baking equipment normalizes the gelatinization feature vector to obtain the target gelatinization feature vector of the image corresponding to the image information.
S43, the steaming and baking equipment determines the element with the largest value in the target gelatinization feature vector as the gelatinization feature information.
Normalizing the gelatinization feature vector facilitates the subsequent processing, and the element with the largest value in the target gelatinization feature vector is selected as the gelatinization feature information. The steaming and baking equipment thus obtains the gelatinization degree of the image accurately through quantifiable gelatinization feature information. Compared with existing model training on the whole food material image information, this reduces the computational complexity of image identification.
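A minimal sketch of steps S42 and S43 follows. Softmax is assumed as the normalization here; the text only says the vector is normalized, so any normalization producing comparable element values would fit.

```python
import numpy as np

def gelatinization_info(feature_vector):
    """Normalize the merged feature vector (S42) and take its largest
    element as the gelatinization feature information (S43).
    Softmax normalization is an assumption for illustration."""
    z = feature_vector - feature_vector.max()   # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()         # target gelatinization feature vector
    return float(probs.max())

v = np.array([0.2, 1.5, 0.3, 0.9])
print(round(gelatinization_info(v), 4))
```

The result is a single quantifiable number in (0, 1), which is exactly the form the matching step against a preset feature value needs.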
As shown in fig. 8, an embodiment of the present disclosure provides a method for identifying food material images, including:
S51, the steaming and baking equipment acquires image information collected by a camera configured on the steaming and baking equipment.
S52, the steaming and baking equipment extracts each pixel in the image information.
S53, the steaming and baking equipment performs median filtering on each pixel of the image information using a median filter and updates the image information.
In this step, the principle of the median filter is to set the gray value of each pixel to the median of the gray values of all pixels in its neighborhood. Because the illumination intensity inside the steaming and baking equipment is not constant during use, and the camera arranged on the equipment may be out of focus, noise can appear in the image. This noise interferes with the subsequent image identification and reduces its accuracy. To reduce this interference, a median filter is applied to each pixel of the image information to remove the noise.
As an example, the neighborhood of a pixel may be a preset region centered on that pixel, for instance a rectangular region; the embodiments of the present disclosure do not limit the choice of the preset region. For example, suppose the gray value of the target pixel is 9 and the gray values of the other pixels in the rectangular region centered on it are 1, 2, 1, 1, 3, 1, 1, 1. The rectangular region thus contains 9 pixels, with the target pixel at its center. After median filtering, the gray value of the target pixel is the median of these nine gray values, i.e., it is updated to 1.
S54, the steaming and baking equipment performs image segmentation on the updated image information to obtain a plurality of sub-images.
S55, the steaming and baking equipment performs feature extraction on each of the plurality of sub-images through the image classification network model and generates gelatinization feature information associated with the image information.
S56, the steaming and baking equipment determines the gelatinization condition of the image corresponding to the image information according to the gelatinization feature information.
With this method for identifying food material images, the noise in the image is effectively removed, improving the quality of the image and the accuracy of image identification in the subsequent steps.
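The median filtering of step S53 can be sketched as follows; the 3 x 3 neighborhood and edge padding are assumptions, since the text leaves the preset region open.

```python
import numpy as np

def median_filter(gray, size=3):
    """Median filter (S53): each pixel's gray value becomes the median of
    the gray values in its size x size neighborhood (edges are edge-padded)."""
    pad = size // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.empty_like(gray)
    H, W = gray.shape
    for r in range(H):
        for c in range(W):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out

img = np.array([[1, 2, 1],
                [1, 9, 3],   # the 9 is a noise spike from glare or defocus
                [1, 1, 1]])
print(median_filter(img)[1, 1])   # the spike is replaced by the neighborhood median
```

This reproduces the worked example above: the target pixel with gray value 9 takes the median of its 3 x 3 neighborhood, which is 1.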
Optionally, the steaming and baking equipment determines the gelatinization condition of the image corresponding to the image information according to the gelatinization feature information, including:
the steaming and baking equipment determines that the image corresponding to the image information is in a gelatinized state when the gelatinization feature information matches a preset gelatinization feature value.
In this way, the steaming and baking equipment represents the gelatinization feature information in a quantifiable form that reflects the gelatinization degree of the image more accurately; by matching the gelatinization feature information against the preset gelatinization feature value, the gelatinization state of the image is obtained, effectively reducing the difficulty of image gelatinization identification.
In this step, the gelatinization feature information matching the preset gelatinization feature value may mean that the gelatinization feature information is greater than or equal to the preset gelatinization feature value, or that it falls within a preset range corresponding to the preset gelatinization feature value; the embodiments of the present disclosure do not specifically limit this.
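Both matching rules named above can be sketched as follows; the preset value and range are hypothetical, since the patent does not fix them.

```python
def is_gelatinized(info, preset=0.6, band=None):
    """Matching rule for the gelatinization feature information:
    either meet or exceed the preset gelatinization feature value,
    or fall within a preset range (band). Values are hypothetical."""
    if band is not None:
        low, high = band
        return low <= info <= high
    return info >= preset

print(is_gelatinized(0.7))                    # threshold form of the match
print(is_gelatinized(0.5, band=(0.4, 0.6)))   # range form of the match
```

Keeping both forms behind one function mirrors the text's point that either interpretation of "matches" is permitted.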
In practical application, take the steaming and baking equipment to be the oven shown in fig. 4 as an example.
First, the oven performs image segmentation on the image information collected by the camera, dividing the image into 16 sub-images F_0i, i = 1, 2, ..., 16.
Then, the oven inputs each of the 16 sub-images into the attention network model for gelatinization feature extraction to generate the first gelatinization feature sub-images, and classifies the first gelatinization feature sub-images with the KNN algorithm to generate the second gelatinization feature sub-images F_1i, i = 1, 2, ..., 16.
Third, the oven inputs the second gelatinization feature sub-images into a convolution layer with kernel size 3 and stride 2 for network training to obtain F_2i, i = 1, 2, ..., 16. The oven then inputs F_2i into another convolution layer with kernel size 3 and stride 2 to obtain F_3i, i = 1, 2, ..., 16, and inputs F_3i into a further convolution layer with kernel size 3 and stride 2 to obtain F_4i, i = 1, 2, ..., 16.
The oven then performs dimension conversion on each F_4i and merges the results to generate a one-dimensional vector M_5, and normalizes M_5 to obtain the target gelatinization feature vector.
Finally, the oven determines the element with the largest value in the target gelatinization feature vector as the gelatinization feature information.
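Assuming padding 1 (the text specifies only kernel size 3 and stride 2) and a hypothetical 64-pixel sub-image side, the spatial size across the three convolution layers F_1i through F_4i can be traced as follows:

```python
def conv_out(size, kernel=3, stride=2, padding=1):
    """Output spatial size of one convolution layer (padding 1 assumed)."""
    return (size + 2 * padding - kernel) // stride + 1

h = 64                     # hypothetical sub-image height
for step in range(3):      # F_1i -> F_2i -> F_3i -> F_4i
    h = conv_out(h)
    print(h)
```

Each stride-2 layer halves the spatial size, so the tensors fed to the dimension conversion step are small, which is consistent with the reduced training load the embodiments claim.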
As shown in fig. 9, an embodiment of the present disclosure provides a device for identifying food material images, including an acquisition module 21, a segmentation module 22, a determination module 23, and an execution module 24. The acquisition module 21 is configured to acquire image information collected by a camera configured on the steaming and baking equipment; the segmentation module 22 is configured to perform image segmentation on the image information to obtain a plurality of sub-images; the determination module 23 is configured to perform feature extraction on each of the plurality of sub-images through the image classification network model and generate gelatinization feature information associated with the image information; and the execution module 24 is configured to determine, according to the gelatinization feature information, the gelatinization condition of the image corresponding to the image information.
The identification apparatus for food material images provided by the embodiment of the present disclosure improves the real-time performance of gelatinization identification of food material images and ensures the accuracy of gelatinization-degree judgment without sacrificing network training speed.
As shown in connection with fig. 10, an embodiment of the present disclosure provides an identification apparatus for food material images, including a processor (processor) 100 and a memory (memory) 101. Optionally, the apparatus may further comprise a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call logic instructions in the memory 101 to perform the identification method for food material images of the above-described embodiments.
Furthermore, the above logic instructions in the memory 101 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
The memory 101 is a computer-readable storage medium that can be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. By running the program instructions/modules stored in the memory 101, the processor 100 executes functional applications and performs data processing, i.e., implements the identification method for food material images in the above-described embodiments.
The memory 101 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required by a function, and the data storage area may store data created according to the use of the terminal device, and the like. Furthermore, the memory 101 may include a high-speed random access memory and may also include a nonvolatile memory.
The embodiment of the disclosure provides steaming and baking equipment, which comprises the identification device for food material images.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described identification method for food material images.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described identification method for food material images.
The computer readable storage medium may be a transitory computer readable storage medium or a non-transitory computer readable storage medium.
Embodiments of the present disclosure may be embodied in a software product stored on a storage medium, including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of a method according to the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including any of a variety of media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or it may be a transitory storage medium.
The above description and the drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may involve structural, logical, electrical, process, and other changes. The embodiments represent only possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. Moreover, the terminology used in the present application is for the purpose of describing embodiments only and is not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this disclosure is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, when used in the present disclosure, the terms "comprises," "comprising," and/or variations thereof mean that the recited features, integers, steps, operations, elements, and/or components are present, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, or apparatus comprising that element. In this context, each embodiment may be described with emphasis on its differences from the other embodiments, and identical or similar parts among the various embodiments may be referred to one another. For the methods, products, etc. disclosed in the embodiments, where they correspond to the method sections disclosed therein, the description of the method sections may be referred to for the relevant details.
Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. The skilled artisan may use different methods for each particular application to achieve the described functionality, but such implementation should not be considered to be beyond the scope of the embodiments of the present disclosure. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the embodiments disclosed herein, the disclosed methods and articles of manufacture (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units may be merely a logical function division, and there may be additional divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, which may be in electrical, mechanical, or other form. The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of this embodiment. In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than that disclosed in the description, and sometimes no specific order exists between different operations or steps. For example, two consecutive operations or steps may actually be performed substantially in parallel, they may sometimes be performed in reverse order, which may be dependent on the functions involved. Each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (10)

1. A method for identifying an image of a food material, comprising:
acquiring image information captured by a camera configured on steaming and baking equipment;
performing image segmentation on the image information to obtain a plurality of sub-images;
respectively extracting the characteristics of the plurality of sub-images through an image classification network model and generating gelatinized characteristic information associated with the image information;
and determining, according to the gelatinized characteristic information, the gelatinization condition of the image corresponding to the image information.
2. The method of claim 1, wherein performing feature extraction on the plurality of sub-images through the image classification network model and generating the gelatinized characteristic information associated with the image information, respectively, comprises:
inputting each sub-image into an attention network model for gelatinized feature extraction to obtain a first gelatinized characteristic sub-image corresponding to each sub-image;
classifying the first gelatinized characteristic sub-images corresponding to the sub-images by using a KNN algorithm to generate second gelatinized characteristic sub-images corresponding to the sub-images;
and inputting the second gelatinized characteristic sub-image corresponding to each sub-image into an N-layer convolution layer for network training to obtain gelatinized characteristic information associated with the image information.
3. The method of claim 2, wherein classifying the first gelatinized feature sub-image corresponding to each sub-image using a KNN algorithm to generate the second gelatinized feature sub-image corresponding to each sub-image comprises:
extracting the pixel points of each first gelatinized characteristic sub-image and the adjacent pixel points associated with each pixel point;
and performing channel splicing processing on each pixel point and its corresponding adjacent pixel points to generate the second gelatinized characteristic sub-image.
4. The method according to claim 2, wherein after inputting the second gelatinized characteristic sub-image corresponding to each sub-image into the N-layer convolution layer for network training, the method further comprises:
performing dimension conversion processing on each second gelatinized characteristic sub-image subjected to network training processing to generate corresponding gelatinized characteristic vectors of each sub-image;
and merging the gelatinized characteristic vectors corresponding to the sub-images to generate the gelatinized characteristic information.
5. The method of claim 4, wherein the merging the corresponding gelatinized feature vectors of the sub-images to generate the gelatinized feature information comprises:
normalizing the gelatinized feature vectors corresponding to the sub-images to obtain target gelatinized feature vectors;
and determining the element with the largest numerical value in the target gelatinized feature vector as the gelatinized feature information.
6. The method according to any one of claims 1 to 5, wherein before performing image segmentation on the image information to obtain a plurality of sub-images, the method further comprises:
extracting each pixel point in the image information;
and carrying out median filtering processing on each pixel point of the image information by using a median filter, and updating the image information so as to carry out image segmentation processing according to the updated new image information.
7. The method according to any one of claims 1 to 5, wherein determining, according to the gelatinized characteristic information, the gelatinization condition of the image corresponding to the image information comprises:
determining that the image corresponding to the image information is in a gelatinized state under the condition that the gelatinized characteristic information matches a preset gelatinized characteristic value.
8. An identification device for food material images comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the identification method for food material images according to any one of claims 1 to 7 when the program instructions are run.
9. A steaming and baking apparatus comprising the recognition device for food material images according to claim 8.
10. A storage medium storing program instructions which, when executed, perform the identification method for food material images according to any one of claims 1 to 7.
CN202210105044.5A 2022-01-28 2022-01-28 Identification method and device for food material image, steaming and baking equipment and storage medium Pending CN116579969A (en)


