CN114187248A - Food quality detection method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN114187248A
Authority
CN
China
Prior art keywords
image
food
region
model
preset
Prior art date
Legal status
Pending
Application number
CN202111454961.6A
Other languages
Chinese (zh)
Inventor
章海
扶建方
刘东海
黄飞
Current Assignee
Shengdoushi Shanghai Science and Technology Development Co Ltd
Original Assignee
Shengdoushi Shanghai Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shengdoushi Shanghai Technology Development Co Ltd filed Critical Shengdoushi Shanghai Technology Development Co Ltd
Priority to CN202111454961.6A
Publication of CN114187248A
Legal status: Pending

Classifications

    • G06T 7/001: Industrial image inspection using an image reference approach
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06T 7/11: Region-based segmentation
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/10004: Still image; photographic image
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30128: Food products (industrial image inspection)

Abstract

The present disclosure relates to a food quality detection method and apparatus, an electronic device, and a storage medium. The method includes: performing region segmentation on a food image to be detected using an image segmentation model to obtain a plurality of region images; for each of the region images, evaluating the region image with a preset model corresponding to the region category of that image to obtain an evaluation result of the region image; and generating a detection result of the food image according to the evaluation result of each region image, the detection result indicating the quality of the food. Embodiments of the disclosure can improve detection accuracy.

Description

Food quality detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for detecting food quality, an electronic device, and a storage medium.
Background
In the pizza industry, pizza quality determines sales volume to a certain extent. During pizza making, chefs differ in technical proficiency, which may lead to inconsistent pizza quality and affect the customer's taste experience. For a large chain of brand stores, assigning personnel to perform quality inspection consumes considerable manpower and is impractical. In this situation, artificial intelligence technology can help restaurant managers understand a chef's pizza-making level and assist stores in self-inspection based on the recognition results, thereby improving the production level and enhancing customer satisfaction. How to improve detection accuracy has therefore become an urgent problem in food quality detection.
Disclosure of Invention
The disclosure provides a food quality detection method and device, an electronic device and a storage medium, which can improve the accuracy of food quality detection.
According to an aspect of the present disclosure, there is provided a food quality detection method, including: performing region segmentation on a food image to be detected by using an image segmentation model to obtain a plurality of region images;
for each region image in the plurality of region images, evaluating the region image by using a preset model corresponding to the region category of the region image to obtain an evaluation result of the region image;
and generating a detection result of the food image according to the evaluation result of each region image, wherein the detection result is used for indicating the quality of the food.
In one possible implementation, the region categories include at least one of cake edge, cheese, and filling, wherein the preset model corresponding to the cake edge includes a cake edge color model and/or a cake edge integrity model, the cheese corresponds to a cheese color model, and the filling corresponds to a filling dispersion degree model.
In a possible implementation manner, the evaluating the region image by using a preset model corresponding to a region category of the region image to obtain an evaluation result of the region image includes:
under the condition that the area type of the area image is a cake edge, evaluating the area image by adopting the cake edge color model, and determining whether the cake edge color of the food meets a first preset condition;
and/or,
under the condition that the region type of the region image is a cake edge, evaluating the region image by adopting the cake edge integrity model, and determining whether the cake edge integrity of the food meets a second preset condition;
and/or,
under the condition that the region type of the region image is cheese, evaluating the region image by adopting the cheese color model to determine whether cheese color of food meets a third preset condition;
and/or,
and under the condition that the region type of the region image is the filling, evaluating the region image by adopting the filling dispersion degree model, and determining whether the filling dispersion degree of the food meets a fourth preset condition.
In a possible implementation manner, the generating a detection result of the food image according to the evaluation result of each region image includes:
and generating a visual detection result map of the food image according to the evaluation result of each region image, wherein the visual detection result map includes the food image, the position of each region image in the food image, and the evaluation result of each region image.
In one possible implementation, the method further includes:
and sending the visual detection result map to a server, wherein the food image is transmitted based on printable characters, and the position of each area image in the food image and the evaluation result of each area image are transmitted based on a standard digital format character string.
In a possible implementation manner, the preset model includes a cascaded global pooling module, the cascaded global pooling module includes a global average pooling layer and a global maximum pooling layer, and evaluating the region image by using the preset model corresponding to the region category of the region image to obtain the evaluation result of the region image includes:
and extracting the feature information of the region image by using the cascaded global pooling module in the preset model corresponding to the region category of the region image, and obtaining an evaluation result of the region image based on the feature information of the region image.
According to an aspect of the present disclosure, a food quality detection system is provided, which in one possible implementation includes an edge device and a server, wherein the edge device includes an image acquisition module and a food quality detection module;
the image acquisition module is used for acquiring food images to be detected;
the food quality detection module is used for detecting the food quality of the food image acquired by the image acquisition module through the food quality detection method of any one of claims 1 to 6 to obtain a detection result of the food image, and the detection result is used for indicating the quality of the food;
the server is used for receiving and displaying the detection result of the food image.
In one possible implementation, the server is further configured to:
and recording the detection result of the food image and returning a recording completion message to the image acquisition module.
In one possible implementation, the server is further configured to:
and sending an image segmentation model and a preset model corresponding to each region type to the food quality detection module.
In one possible implementation, the server is further configured to:
and adjusting parameters of the image segmentation model and the preset models corresponding to the region types according to the detection result of the food image, and sending the adjusted image segmentation model and the adjusted preset models corresponding to the region types to the food quality detection module.
In one possible implementation, the image acquisition module is further configured to:
in the event that a moving object is detected, the food product image is acquired.
In one possible implementation, the image acquisition module is further configured to:
extracting feature information of a currently acquired food image;
determining the similarity of the characteristic information and the characteristic information of the previously collected food image;
discarding the currently acquired food image under the condition that the similarity is greater than a preset threshold;
and under the condition that the similarity is smaller than or equal to a preset threshold value, sending the currently acquired food image to the food quality detection module.
According to an aspect of the present disclosure, there is provided a food quality detection apparatus, the apparatus including:
the segmentation module is used for carrying out region segmentation on the food image to be detected by adopting an image segmentation model to obtain a plurality of region images;
the evaluation module is used for evaluating each regional image in the plurality of regional images obtained by the segmentation module by adopting a preset model corresponding to the regional category of the regional image to obtain an evaluation result of the regional image;
and the generating module is used for generating a detection result of the food image according to the evaluation result of each region image, and the detection result is used for indicating the quality of the food.
In one possible implementation, the region categories include at least one of cake edge, cheese, and filling, wherein the preset model corresponding to the cake edge includes a cake edge color model and/or a cake edge integrity model, the cheese corresponds to a cheese color model, and the filling corresponds to a filling dispersion degree model.
In one possible implementation, the evaluation module is further configured to:
under the condition that the area type of the area image is a cake edge, evaluating the area image by adopting the cake edge color model, and determining whether the cake edge color of the food meets a first preset condition;
and/or,
under the condition that the region type of the region image is a cake edge, evaluating the region image by adopting the cake edge integrity model, and determining whether the cake edge integrity of the food meets a second preset condition;
and/or,
under the condition that the region type of the region image is cheese, evaluating the region image by adopting the cheese color model to determine whether cheese color of food meets a third preset condition;
and/or,
and under the condition that the region type of the region image is the filling, evaluating the region image by adopting the filling dispersion degree model, and determining whether the filling dispersion degree of the food meets a fourth preset condition.
In one possible implementation, the generating module is further configured to:
and generating a visual detection result graph of the food image according to the evaluation result of each region image, wherein the visual detection result graph comprises the food image, the position of each region image in the food image and the evaluation result of each region image.
In one possible implementation, the apparatus further includes:
and the sending module is used for sending the visual detection result graph to a server, wherein the food images are transmitted based on printable characters, and the position of each area image in the food images and the evaluation result of each area image are transmitted based on a standard digital format character string.
In a possible implementation manner, the preset model includes a cascaded global pooling module, the cascaded global pooling module includes a global average pooling layer and a global maximum pooling layer, and the evaluation module is further configured to:
and extracting the characteristic information of the region image by adopting a cascade pooling module in a preset model corresponding to the preset category of the region image, and obtaining an evaluation result of the region image based on the characteristic information of the region image.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, the food image is segmented through the image segmentation model to obtain a plurality of segmented images, and then the corresponding preset model is adopted for fine-grained evaluation of each segmented image, so that the evaluation accuracy of each segmented image is higher, and the accuracy of the overall detection of the food image is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow chart of a food quality detection method according to an embodiment of the present disclosure;
FIG. 2 illustrates an exemplary schematic diagram of food quality detection of an embodiment of the present disclosure;
fig. 3a illustrates an exemplary schematic diagram of a pizza image provided by an embodiment of the present disclosure;
FIG. 3b shows a schematic view of the pizza shown in FIG. 3a after slicing;
FIG. 4 illustrates an exemplary diagram of cosine similarity;
fig. 5 is a schematic diagram illustrating an architecture of a food quality detection system provided by an embodiment of the present disclosure;
fig. 6 shows an interactive flowchart of a food quality detection method provided by an embodiment of the present disclosure;
fig. 7 shows an interactive flowchart of a food quality detection method provided by an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a food quality detection device provided in an embodiment of the present disclosure;
FIG. 9 shows a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure;
fig. 10 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a food quality detection method according to an embodiment of the present disclosure, which includes, as shown in fig. 1:
and step S11, carrying out region segmentation on the food image to be detected by adopting the image segmentation model to obtain a plurality of region images.
Step S12, for each of the area images, evaluating the area image by using a preset model corresponding to the area type of the area image, to obtain an evaluation result of the area image.
And step S13, generating a detection result of the food image according to the evaluation result of each region image, wherein the detection result is used for indicating the quality of the food.
In a possible implementation manner, the food quality detection method may be performed by an electronic device such as an edge device or a server, where the edge device may be a User Equipment (UE), a mobile device, a User terminal, a Personal Digital Assistant (PDA), a handheld device, a computing device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
In the embodiment of the disclosure, the food image is segmented through the image segmentation model to obtain a plurality of segmented images, and then the corresponding preset model is adopted for fine-grained evaluation of each segmented image, so that the evaluation accuracy of each segmented image is higher, and the accuracy of the overall detection of the food image is improved.
In step S11, the food image to be detected may be any food image. The food in the food image may be any food, such as pizza, cake, bread, or pasta. In a possible implementation manner, the electronic device may acquire the food image to be detected through an image acquisition device configured on the electronic device itself or an external image acquisition device connected in a wired or wireless manner. The image acquisition device may be a camera, a video camera, or a monitoring device.
In one example, the image capturing device may capture an image of the food item when a moving object is detected, after which the electronic device performs step S11. For example, frames captured by the camera at different frame rates are compared according to a certain algorithm; when the picture changes (for example, food is placed or taken away, or the lens is moved), the computed difference exceeds a certain threshold, at which point the image acquisition device can determine that a moving object is detected and then capture the food image. Capturing food images upon detecting moving objects enables unattended automatic monitoring, saving labor costs and improving accuracy. The embodiment of the present disclosure does not limit the method by which the image acquisition device detects moving objects.
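As an illustration of the frame-comparison approach described above, the following Python sketch uses OpenCV frame differencing. The function name, the pixel threshold, and the changed-pixel ratio are assumptions for illustration; the embodiment does not specify the algorithm.

```python
import cv2

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, change_ratio=0.02):
    """Return True when enough pixels changed between two consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    changed = (diff > pixel_thresh).sum()      # pixels whose intensity changed
    return changed / diff.size > change_ratio  # picture changed beyond threshold

cap = cv2.VideoCapture(0)                      # hypothetical capture device
ok, prev = cap.read()
while ok:
    ok, curr = cap.read()
    if not ok:
        break
    if motion_detected(prev, curr):
        cv2.imwrite("food_image.jpg", curr)    # acquire the food image
    prev = curr
```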
In one example, after the image acquisition device acquires the food image, the image acquisition device can extract the feature information of the currently acquired food image and determine the similarity between the feature information of the currently acquired food image and the feature information of the previously acquired food image; and in a case where the similarity is less than or equal to the preset threshold, performing, by the electronic device, step S11.
The preset threshold may be set as needed, and may be, for example, 90% or 95%. The similarity between the feature information of the currently acquired food image and the feature information of the previously acquired food image may be their cosine similarity. When the similarity is greater than the preset threshold, the currently acquired food image and the previously acquired food image are very likely images of the same food, and food quality detection does not need to be performed again; the currently acquired food image can therefore be discarded, that is, no food quality detection is performed on it. When the similarity is less than or equal to the preset threshold, the two images are unlikely to show the same food, and the quality of the food needs to be detected, so the electronic device may perform step S11 and the subsequent steps. In the embodiment of the disclosure, filtering by similarity comparison can largely avoid repeated computation, reducing the amount of calculation and improving detection accuracy. It should be noted that the image acquisition device may extract the feature information of the currently acquired food image and of the previously acquired food image through a convolutional neural network; the embodiment of the present disclosure does not limit the convolutional neural network used for extracting the feature information.
Fig. 3a illustrates an exemplary schematic diagram of a pizza image provided by an embodiment of the present disclosure. When the chef places the pizza at the meal outlet, the image acquisition device detects the moving object and acquires a first pizza image. Thereafter, the attendant slices the pizza. Fig. 3b shows a schematic view of the pizza shown in fig. 3a after slicing. When the attendant cuts the pizza, the image acquisition device again detects a moving object and acquires a second pizza image. The image acquisition device extracts the feature information of the first pizza image and of the second pizza image through a convolutional neural network, and calculates the cosine similarity between them. Fig. 4 shows an exemplary diagram of cosine similarity. As shown in fig. 4, the closer the angle between the two feature vectors is to 0°, i.e., the closer the cosine similarity is to 1, the more similar the feature information of the first and second pizza images.
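A minimal sketch of this similarity filter follows. The choice of ResNet-18 as the feature extractor and the 0.95 threshold are illustrative assumptions; as noted above, the embodiment does not fix the convolutional neural network.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the pooled features, drop the classifier
backbone.eval()

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

@torch.no_grad()
def extract_features(image_bgr):
    """Extract a feature vector from a BGR image (e.g. an OpenCV frame)."""
    x = preprocess(image_bgr[:, :, ::-1].copy()).unsqueeze(0)  # BGR -> RGB
    return backbone(x).squeeze(0)

def is_duplicate(feat_curr, feat_prev, threshold=0.95):
    """Discard the current image when cosine similarity exceeds the threshold."""
    sim = torch.nn.functional.cosine_similarity(feat_curr, feat_prev, dim=0)
    return sim.item() > threshold
```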
In step S11, the image segmentation model may be used to segment the food image into a plurality of region images. The food image is divided into a plurality of regions according to image feature information such as gray scale, color, texture, and shape, so that the feature information is similar within the same region and clearly different between different regions. The type of the segmented region image is related to the food image to be detected and can be set as required. Taking pizza as an example, the image segmentation model can be used to segment the food image into a region image corresponding to the cake edge, a region image corresponding to the cheese, and a region image corresponding to the filling. Taking an egg tart as an example, the image segmentation model can segment the food image into a region image corresponding to the tart shell and a region image corresponding to the custard center.
In one possible implementation, the image segmentation model is a neural network model. In one example, the image segmentation model includes, but is not limited to, a Fully Convolutional Network (FCN) and a semantic segmentation model (SegNet). The FCN replaces the fully connected layers of a classification network with convolutional and pooling layers, so that the image segmentation model can adapt to pixel-level segmentation tasks while allowing end-to-end training. The embodiment of the present disclosure does not limit the network structure of the image segmentation model.
In one possible implementation, the input of the image segmentation model is a food image, and the output of the image segmentation model is a region image of the same size as the input food image. In a possible implementation manner, the region image may be represented by the position of the boundary of the region image in the food image, and the value of the pixel point of the region image does not need to be stored additionally, so that the storage space can be saved.
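For illustration, the boundary-based representation can be recovered from the segmentation output with contour extraction, as in the sketch below; the class ids and names are assumptions and not fixed by the embodiment.

```python
import cv2
import numpy as np

CLASS_NAMES = {1: "cake_edge", 2: "cheese", 3: "filling"}  # illustrative ids

def regions_from_mask(mask):
    """mask: H x W array of per-pixel class ids output by the segmentation model."""
    regions = {}
    for class_id, name in CLASS_NAMES.items():
        binary = (mask == class_id).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Only the boundary positions are stored; the region's pixel values
        # need no additional storage, which saves space.
        regions[name] = [c.reshape(-1, 2).tolist() for c in contours]
    return regions
```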
In one possible implementation, the image segmentation model may be trained using a first training set, where the first training set includes a plurality of first training images and label information of each first training image, and the label information is used to indicate the region category corresponding to each pixel in the food image. Taking pizza as an example, the region categories include, but are not limited to, cake edge, cheese, and filling.
In a possible implementation manner, in the process of obtaining the first training set, data expansion may be performed by using a data enhancement technique such as Mixup, so as to increase the number of training images and improve the accuracy of the image segmentation model.
In one possible implementation, the image segmentation model may be trained by a server, and step S11 may be performed by an edge device. And after the image segmentation model is trained, the image segmentation model is issued to the edge device by the server. The edge device can be arranged in a store, and the server can be arranged in the cloud. The edge device and the server can be connected through an intranet. The server can respectively issue the trained image segmentation models to the edge devices. After each edge device receives the trained image segmentation model, the food image can be subjected to region segmentation by adopting the image segmentation model. In one example, after the edge device and the server successfully establish a connection, the server may issue an image segmentation model to the edge device in an initialization stage; the server may also issue the image segmentation model to the edge device when receiving an image segmentation model request from the edge device. In the embodiment of the disclosure, the server trains and distributes the image segmentation models, so that edge devices distributed in each store can adopt the same image segmentation model, thereby ensuring the uniformity of food image processing and being beneficial to realizing quality detection with uniform standard; the edge device performs region segmentation on the food image by adopting the image segmentation model, so that distributed processing of quality detection can be realized, the workload of the server is reduced, and the cost of the server is reduced.
In a possible implementation manner, the image segmentation model may be trained by a server, and step S11 may also be performed by the server, in which case, the edge device in the store may provide the food image to be detected to the server, and the server performs region segmentation on the food image by using the image segmentation model, so that the workload of the edge device may be reduced, the cost of the edge device may be reduced, but the working pressure of the server may be increased, and the data transmission pressure between the edge device and the server may be increased.
In step S12, for each of the plurality of area images, the electronic device may evaluate the area image by using a preset model corresponding to the area type of the area image, so as to obtain an evaluation result of the area image.
Taking pizza as an example, the region categories may include at least one of cake edge, cheese, and filling. Fig. 2 illustrates an exemplary schematic diagram of food quality detection according to an embodiment of the present disclosure. In fig. 2, pizza is used as the quality detection target. As shown in fig. 2, after the food image is input into the image segmentation model, three region images can be obtained, corresponding to the cake edge, the cheese, and the filling.
In one possible implementation, each region category may correspond to one or more preset models. For example, the cake edge may correspond to a cake edge color model and/or a cake edge integrity model, the cheese may correspond to a cheese color model, and the filling may correspond to a filling dispersion degree model. The cake edge color model can be used to determine the baking degree of the pizza: if the cake edge color is too light, the pizza is undercooked; if the cake edge color is too dark, the pizza is scorched; either affects the taste and reduces the user's dining experience. The cake edge integrity model can be used to determine whether the pizza is complete: if the cake edge is incomplete, it may have been damaged during handling or covered by filling, which affects the user's impression and reduces the dining experience. The cheese color model can likewise be used to determine the baking degree of the pizza: cheese that is too light may be undercooked, and cheese that is too dark may be scorched, either of which affects the taste. The filling dispersion degree model can be used to determine whether the filling is scattered uniformly: if the topping is not evenly distributed over the crust area, both taste and appearance are affected, reducing the user's dining experience. As shown in fig. 2, the cake edge color model and the cake edge integrity model evaluate the region images of the cake edge category, the cheese color model evaluates the region images of the cheese category, and the filling dispersion degree model evaluates the region images of the filling category.
As shown in fig. 2, the region images obtained in step S11 may be respectively input into the corresponding preset models for evaluation along different dimensions. Specifically, the region image of the cake edge category may be input into the cake edge color model and the cake edge integrity model for evaluation of cake edge color and cake edge integrity, the region image of the cheese category may be input into the cheese color model for evaluation of cheese color, and the region image of the filling category may be input into the filling dispersion degree model for evaluation of filling dispersion.
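This category-to-model routing can be sketched as follows. The class and model names are placeholders, and the stub returns a fixed pass where a trained network would run inference.

```python
class PresetModel:
    """Stand-in for a trained evaluation network (names are assumptions)."""
    def __init__(self, name):
        self.name = name

    def evaluate(self, region_image):
        # A real model would run inference here; this stub always passes.
        return {"model": self.name, "passed": True, "reason": None}

PRESET_MODELS = {
    "cake_edge": [PresetModel("cake_edge_color"), PresetModel("cake_edge_integrity")],
    "cheese":    [PresetModel("cheese_color")],
    "filling":   [PresetModel("filling_dispersion")],
}

def evaluate_regions(region_images):
    """region_images: iterable of (category, image) pairs from the segmentation step."""
    return [model.evaluate(image)
            for category, image in region_images
            for model in PRESET_MODELS.get(category, [])]
```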
In one possible implementation, step S12 may include: and under the condition that the area type of the area image is a cake edge, evaluating the area image by adopting the cake edge color model, and determining whether the cake edge color of the food meets a first preset condition.
The first preset condition may be set as required. In one example, the first preset condition may be that the gray value of the cake edge color is greater than a first gray value and less than a second gray value, where the first gray value is smaller than the second gray value; for example, the first gray value may be 150 and the second gray value may be 200. When the gray value of the cake edge color of the food is greater than the first gray value and less than the second gray value, it may be determined that the cake edge color is normal, and the evaluation result of the region image may be a pass. When the gray value of the cake edge color is less than or equal to the first gray value, it may be determined that the cake edge color is too light; the evaluation result may then be a fail, with the reason being that the food is undercooked. When the gray value of the cake edge color is greater than or equal to the second gray value, it may be determined that the cake edge color is too dark; the evaluation result may then be a fail, with the reason being that the food is scorched. Of course, the first preset condition may also be used to evaluate hue, brightness, and the like; the first preset condition is not limited in the embodiment of the present disclosure.
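Using the example thresholds of 150 and 200, the first preset condition can be sketched as below. The mean gray value of the region is an illustrative proxy for "the gray value of the cake edge color", and the light/dark convention follows the text above.

```python
import cv2

def check_cake_edge_color(region_bgr, first_gray=150, second_gray=200):
    """Evaluate the first preset condition on a cake edge region image."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    mean_gray = gray.mean()
    if mean_gray <= first_gray:
        return False, "undercooked"  # cake edge color too light
    if mean_gray >= second_gray:
        return False, "scorched"     # cake edge color too dark
    return True, None                # cake edge color normal: pass
```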
In one possible implementation, step S12 may include: and under the condition that the area type of the area image is the cake edge, evaluating the area image by adopting the cake edge integrity model, and determining whether the cake edge integrity of the food meets a second preset condition.
The second preset condition may be set as required. In one example, the second preset condition may be that the cake edge integrity of the food is greater than a first integrity; for example, the first integrity may be 95% or 98%. When the cake edge integrity of the food is greater than the first integrity, indicating that any cake edge defect is within an acceptable range, the evaluation result of the region image may be a pass. When the cake edge integrity is less than or equal to the first integrity, the cake edge defect exceeds the acceptable range and may adversely affect the user's dining experience; the evaluation result may then be a fail, with the reason being a cake edge defect. Of course, the second preset condition may also be used to evaluate shape, curvature, and the like; the second preset condition is not limited in the embodiment of the present disclosure.
In one possible implementation, step S12 may include: and in the case that the region type of the region image is cheese, evaluating the region image by using the cheese color model to determine whether the cheese color of the food meets a third preset condition.
The third preset condition may be set as required. In one example, the third preset condition may be that the gray value of the cheese color is greater than a third gray value and less than a fourth gray value, where the third gray value is smaller than the fourth gray value; for example, the third gray value may be 120 and the fourth gray value may be 160. When the gray value of the cheese color of the food is greater than the third gray value and less than the fourth gray value, it may be determined that the cheese color is normal, and the evaluation result of the region image may be a pass. When the gray value of the cheese color is less than or equal to the third gray value, it may be determined that the cheese color is too light; the evaluation result may then be a fail, with the reason being that the food is undercooked. When the gray value of the cheese color is greater than or equal to the fourth gray value, it may be determined that the cheese color is too dark; the evaluation result may then be a fail, with the reason being that the food is scorched. Of course, the third preset condition may also be used to evaluate hue, brightness, and the like; the third preset condition is not limited in the embodiment of the present disclosure.
In one possible implementation, step S12 may include: and under the condition that the region type of the region image is the filling, evaluating the region image by adopting the filling dispersion degree model, and determining whether the filling dispersion degree of the food meets a fourth preset condition.
The fourth preset condition may be set as required. In one example, the fourth preset condition may be that the dispersion degree of the filling is greater than a first dispersion degree; for example, the first dispersion degree may be 90% or 95%. When the dispersion degree of the filling is greater than the first dispersion degree, it can be determined that the filling is scattered relatively uniformly, and the evaluation result of the region image may be a pass. When the dispersion degree of the filling is less than or equal to the first dispersion degree, it can be determined that the filling is not scattered uniformly, which affects the appearance and taste of the food; the evaluation result may then be a fail.
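The text does not define how the dispersion degree is computed. One simple illustrative proxy, not taken from the embodiment, divides the filling region into grid cells and measures the fraction of cells that contain filling pixels:

```python
import numpy as np

def filling_dispersion(filling_mask, grid=8):
    """filling_mask: H x W boolean array marking filling pixels."""
    h, w = filling_mask.shape
    occupied = [
        filling_mask[i * h // grid:(i + 1) * h // grid,
                     j * w // grid:(j + 1) * w // grid].any()
        for i in range(grid) for j in range(grid)
    ]
    return sum(occupied) / len(occupied)

def check_filling(filling_mask, first_dispersion=0.90):
    # Passes when the dispersion degree exceeds the first dispersion degree.
    return filling_dispersion(filling_mask) > first_dispersion
```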
In step S12, in a possible implementation manner, the preset model includes a cascaded global pooling module, where the cascaded global pooling module is used to extract feature information of the region image, and the cascaded global pooling module includes a global average pooling layer and a global maximum pooling layer.
In a possible implementation manner, evaluating the region image by using the preset model corresponding to the region category of the region image to obtain the evaluation result of the region image may include: extracting the feature information of the region image by using the cascaded global pooling module in the preset model corresponding to the region category of the region image, and obtaining the evaluation result of the region image based on the feature information.
In the embodiment of the disclosure, the preset model includes a convolution module and a cascaded global pooling module. The convolution module can be used to preliminarily extract the feature information of the region image through convolution processing, and the cascaded global pooling module further extracts feature information from the preliminarily extracted features through pooling; the evaluation result of the region image can be obtained based on the feature information extracted by the cascaded global pooling module. In the embodiment of the disclosure, cascaded global average pooling and global maximum pooling layers are adopted to replace the fully connected layer of the related art, which reduces the number of parameters, speeds up training, and prevents overfitting.
Global average pooling averages all feature values of a feature map to obtain a single value that represents that feature map. Global maximum pooling represents the feature map by the maximum of its feature values. Both reduce the amount of computation and help prevent overfitting. In the embodiment of the disclosure, combining the advantages of global average pooling and global maximum pooling compensates for their respective weaknesses, which can improve the accuracy of region image evaluation.
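One plausible PyTorch reading of the cascaded global pooling module is sketched below, concatenating the global average and global maximum pooled features and classifying them with a 1x1 convolution in place of a fully connected layer; the exact wiring is not fixed by the text.

```python
import torch
import torch.nn as nn

class CascadedGlobalPooling(nn.Module):
    """Head combining global average pooling and global maximum pooling."""
    def __init__(self, in_channels, num_classes=2):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling layer
        self.gmp = nn.AdaptiveMaxPool2d(1)  # global maximum pooling layer
        # A 1x1 convolution replaces the fully connected layer of the related art.
        self.classifier = nn.Conv2d(2 * in_channels, num_classes, kernel_size=1)

    def forward(self, x):  # x: (N, C, H, W) features from the convolution module
        pooled = torch.cat([self.gap(x), self.gmp(x)], dim=1)  # (N, 2C, 1, 1)
        return self.classifier(pooled).flatten(1)              # (N, num_classes)
```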
In a possible implementation manner, for each region category, a second training set may be used to train the preset model, where the second training set includes a plurality of second training images and label information of each second training image, and the label information is used to indicate whether a corresponding preset condition is satisfied. Taking pizza as an example, the preset conditions include the first preset condition, the second preset condition, the third preset condition or the fourth preset condition.
In a possible implementation manner, in the process of obtaining the second training set, data expansion may be performed by using a data enhancement technique such as Mixup, so as to increase the number of training images and improve the accuracy of the preset model.
In one possible implementation, the preset models may be trained by the server, and step S12 may be performed by the edge device. In another possible implementation, the preset models may be trained by the server, and step S12 may be performed by the server. For the training and deployment of the preset models, refer to the training and deployment of the image segmentation model described above, which will not be repeated here.
In one possible implementation, the training of the image segmentation model involved in step S11 and the preset model involved in step S12 may be independent of each other. In another possible implementation manner, the image segmentation model involved in step S11 and the preset model involved in step S12 are combined into one model, and end-to-end unified training is performed. In the embodiment of the present disclosure, the training modes of the image segmentation model and the preset model are not limited.
In step S13, the electronic device may generate the detection result of the food image from the evaluation result of each region image. Taking pizza as an example, the detection result of the food image may include whether the cake edge color evaluation passes, whether the cake edge integrity evaluation passes, whether the cheese color evaluation passes, and whether the filling dispersion evaluation passes. Based on this, a quality inspector can determine whether the pizza is undercooked or scorched, whether the pizza is damaged, whether the filling has spread onto the cake edge, whether the filling is scattered uniformly, and so on. The dispersion degree of the filling may be evaluated from various aspects, for example, the dispersion of the shrimp, the dispersion of the mushroom, and the dispersion of the ham.
It should be understood that the above is only an exemplary illustration of the preset model provided in the embodiment of the present disclosure, and is not used to limit the preset model provided in the embodiment of the present disclosure, and the preset model provided in the embodiment of the present disclosure may also be a preset model of other region types.
In one possible implementation, step S13 may include: generating a visual detection result map of the food image according to the evaluation result of each region image, wherein the visual detection result map includes the food image, the position of each region image in the food image, and the evaluation result of each region image.
In the embodiment of the disclosure, the evaluation result can be more intuitively seen through visual display.
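A sketch of assembling such a visual detection result map with OpenCV follows; the drawing style (green boundaries, text labels) is an illustrative choice, not part of the embodiment.

```python
import cv2
import numpy as np

def draw_result_map(food_bgr, regions, evaluations):
    """regions: {category: list of boundary polygons}; evaluations: {category: str}."""
    result = food_bgr.copy()
    for category, polygons in regions.items():
        for poly in polygons:
            pts = np.asarray(poly, dtype=np.int32).reshape(-1, 1, 2)
            cv2.polylines(result, [pts], isClosed=True,
                          color=(0, 255, 0), thickness=2)
            x, y = pts[0, 0]
            label = f"{category}: {evaluations.get(category, 'n/a')}"
            cv2.putText(result, label, (int(x), max(int(y) - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return result
```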
In one possible implementation, the method further includes: sending the visual detection result map to a server.
In the embodiment of the disclosure, the edge device in a store can upload the visual detection result map to the server for quality control personnel to view. Quality control personnel can monitor each store's situation from its visual detection result maps, freeing up trained manpower and resources. At the same time, applying a unified detection standard across stores lets headquarters know the food quality of each chain store in real time, facilitating rewards and penalties.
In one possible implementation, the food image is transmitted based on printable characters, and the position of each region image in the food image and the evaluation result of each region image are transmitted based on a character string in a standard digital format. In one example, the printable characters may be Base64 characters.
In the embodiment of the disclosure, the transmission cost can be reduced by transmitting the image with the printable characters, and the integrity and readability of the data can be ensured by transmitting other information with the character strings in the standard digital format.
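The upload payload can be sketched as follows: the image travels as Base64 printable characters while the positions and evaluation results travel as a JSON character string (one standard digital format). The endpoint URL is a placeholder, not part of the embodiment.

```python
import base64
import json

import cv2
import requests

def upload_result(result_map_bgr, regions, evaluations,
                  url="http://example-server/api/results"):  # placeholder URL
    ok, jpeg = cv2.imencode(".jpg", result_map_bgr)
    payload = {
        "image": base64.b64encode(jpeg.tobytes()).decode("ascii"),  # printable chars
        "regions": regions,          # boundary positions per region image
        "evaluations": evaluations,  # evaluation result per region image
    }
    return requests.post(url, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"})
```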
Fig. 5 is a schematic diagram illustrating an architecture of a food quality detection system according to an embodiment of the present disclosure. As shown in fig. 5, the system includes an edge device 31 and a server 32. The edge device 31 may be disposed in a store, and the server 32 may be disposed in the cloud. The edge device 31 and the server 32 may be connected through an enterprise intranet to improve security. The edge device 31 includes an image acquisition module 311 and a food quality detection module 312, where the image acquisition module may be a snapshot camera and the food quality detection module 312 may be an edge server.
The image capturing module 311 may be configured to capture an image of the food to be detected. In one possible implementation, the image acquisition module is further configured to: in the event that a moving object is detected, the food product image is acquired. In one possible implementation, the image acquisition module is further configured to: extracting feature information of a currently acquired food image; determining the similarity of the characteristic information and the characteristic information of the previously collected food image; discarding the currently acquired food image under the condition that the similarity is greater than a preset threshold; and under the condition that the similarity is smaller than or equal to a preset threshold value, sending the currently acquired food image to the food quality detection module.
The food quality detection module 312 is configured to perform food quality detection on the food image acquired by the image acquisition module 311 using the food quality detection method shown in fig. 1, so as to obtain a detection result of the food image, where the detection result is used to indicate the quality of the food. In the embodiment of the disclosure, the image segmentation model and each preset model are packaged as a network service, through which the detection result of the food quality can be obtained.
The server 32 may be configured to receive and display the detection result of the food image.
As shown in fig. 5, the server 32 may include a device management module 321, an artificial intelligence platform 322, and a food identification record module 323.
In one possible implementation, the server 32 is further configured to: and recording the detection result of the food image and returning a recording completion message to the image acquisition module. In one example, the food identification recording module 323 may be configured to record the detection result of the food image and return a recording completion message to the image capturing module.
In this way, stores can conveniently track the quality inspection progress, and attendants can take the food away at the appropriate time.
In one possible implementation, the server 32 is further configured to: and sending the image segmentation model and the preset model corresponding to each region type to the food quality detection module 312. In one example, the device management module 321 may be configured to send an image segmentation model and a preset model corresponding to each region category to the food quality detection module 312.
Thus, the detection of the food quality can be completed in a store, the calculation amount of the server can be reduced, and the cost of the server can be reduced.
In one possible implementation, the server 32 is further configured to: and adjusting parameters of the image segmentation model and the preset models corresponding to the region types according to the detection result of the food image, and sending the adjusted image segmentation model and the adjusted preset models corresponding to the region types to the food quality detection module. In an example, the device management module 321 may be configured to adjust parameters of the image segmentation model and the preset model corresponding to each region type according to a detection result of the food image, and send the adjusted image segmentation model and the adjusted preset model corresponding to each region type to the food quality detection module.
Therefore, the accuracy of detection can be further improved by adjusting and updating the image segmentation model and the preset model.
In one possible implementation, the artificial intelligence platform 322 may be configured to receive and display the detection result of the food image.
Fig. 6 shows an interactive flowchart of a food quality detection method provided by an embodiment of the present disclosure. The method may be applied to the system shown in fig. 5. As shown in fig. 6, the method includes:
step S401, an image acquisition module acquires a food image to be detected.
In one possible implementation, the image acquisition module acquires the food image when a moving object is detected.
In a possible implementation manner, the image acquisition module is further configured to extract feature information of a currently acquired food image; determining the similarity of the characteristic information and the characteristic information of the previously collected food image; discarding the currently acquired food image under the condition that the similarity is greater than a preset threshold; and under the condition that the similarity is smaller than or equal to a preset threshold value, sending the currently acquired food image to the food quality detection module.
Step S401 may refer to step S11, and will not be described herein.
And S402, the image acquisition module sends the food image to be detected to the food quality detection module.
Step S403, the food quality detection module performs quality detection on the received food image to obtain a detection result of the food image.
Wherein the detection result can be used to indicate the quality of the food product. Step S403 may refer to step S12, and will not be described herein.
Step S404, the food quality detection module sends the detection result of the food image to the server.
In step S405, the server receives and displays the detection result of the food image.
In the embodiment of the disclosure, the food image is segmented through the image segmentation model to obtain a plurality of segmented images, and then the corresponding preset model is adopted for fine-grained evaluation of each segmented image, so that the evaluation accuracy of each segmented image is higher, and the accuracy of the overall detection of the food image is improved.
Fig. 7 shows an interactive flowchart of a food quality detection method provided by an embodiment of the present disclosure. The method may be applied to the system shown in fig. 5. As shown in fig. 7, on the basis of fig. 6, before step S403, the method further includes step S400; step S406 to step S409 are also included after step S405.
In step S400, the server sends the image segmentation model and the preset model corresponding to each region type to the food quality detection module.
In step S406, the server records the detection result of the food image and returns a recording completion message to the image acquisition module.
In step S407, the image capture module, upon receiving the recording completion message, prompts the user that the recording has been completed.
In step S408, the server adjusts parameters of the image segmentation model and the preset model corresponding to each region type according to the detection result of the food image.
In step S409, the server sends the adjusted image segmentation model and the adjusted preset models corresponding to the region categories to the food quality detection module.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the underlying principles and logic; details are omitted here for brevity. Those skilled in the art will appreciate that, in the methods of the specific embodiments above, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a food quality detection apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the food quality detection methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the method section, which will not be repeated here.
Fig. 8 shows a schematic structural diagram of a food quality detection device provided by an embodiment of the present disclosure. As shown in fig. 8, the food quality detection apparatus 600 may include:
a segmentation module 601, configured to perform region segmentation on a food image to be detected by using an image segmentation model to obtain a plurality of region images;
an evaluation module 602, configured to evaluate, for each of the multiple region images obtained by the segmentation module, the region image by using a preset model corresponding to a region category of the region image, so as to obtain an evaluation result of the region image;
a generating module 603, configured to generate a detection result of the food image according to the evaluation result of each region image, where the detection result is used to indicate the quality of the food.
In one possible implementation, the region categories include at least one of cake edge, cheese, and filling, wherein the preset model corresponding to the cake edge includes a cake edge color model and/or a cake edge integrity model, the cheese corresponds to a cheese color model, and the filling corresponds to a filling dispersion degree model.
In one possible implementation, the evaluation module is further configured to:
under the condition that the region category of the region image is a cake edge, evaluating the region image by adopting the cake edge color model, and determining whether the cake edge color of the food meets a first preset condition;
and/or,
under the condition that the region category of the region image is a cake edge, evaluating the region image by adopting the cake edge integrity model, and determining whether the cake edge integrity of the food meets a second preset condition;
and/or,
under the condition that the region category of the region image is cheese, evaluating the region image by adopting the cheese color model, and determining whether the cheese color of the food meets a third preset condition;
and/or,
under the condition that the region category of the region image is the filling, evaluating the region image by adopting the filling dispersion degree model, and determining whether the filling dispersion degree of the food meets a fourth preset condition.
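The category-to-model routing described by the and/or branches above can be summarized as a dispatch table. In the sketch below the trained models are replaced by trivial placeholder functions so that the routing logic itself is runnable; the dictionary keys and function names are assumptions.

def cake_edge_color_model(region_image):     return True  # first preset condition
def cake_edge_integrity_model(region_image): return True  # second preset condition
def cheese_color_model(region_image):        return True  # third preset condition
def filling_dispersion_model(region_image):  return True  # fourth preset condition

PRESET_MODELS = {
    "cake_edge": [cake_edge_color_model, cake_edge_integrity_model],
    "cheese":    [cheese_color_model],
    "filling":   [filling_dispersion_model],
}

def evaluate_region(category: str, region_image) -> bool:
    """Every model registered for the category must report 'qualified'."""
    return all(model(region_image) for model in PRESET_MODELS[category])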
In one possible implementation, the generating module is further configured to:
and generating a visual detection result graph of the food image according to the evaluation result of each region image, wherein the visual detection result graph comprises the food image, the position of each region image in the food image and the evaluation result of each region image.
In one possible implementation, the apparatus further includes:
and the sending module is used for sending the visual detection result graph to a server, wherein the food images are transmitted based on printable characters, and the position of each area image in the food images and the evaluation result of each area image are transmitted based on a standard digital format character string.
In a possible implementation manner, the preset model includes a cascaded global pooling module, the cascaded global pooling module includes a global average pooling layer and a global maximum pooling layer, and the evaluation module is further configured to:
extract feature information of the region image by using the cascaded global pooling module in the preset model corresponding to the region category of the region image, and obtain the evaluation result of the region image based on the feature information of the region image.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for their specific implementation, reference may be made to the descriptions of those embodiments, which are not repeated here for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product, including computer-readable code or a non-transitory computer-readable storage medium carrying computer-readable code; when the code runs in a processor of an electronic device, the processor executes the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 9 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 9, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, relevant features, states, and attributes of the target object are detected or identified by means of various vision-related algorithms, so as to obtain an AR effect, combining the virtual and the real, that matches a specific application. For example, the target object may involve a face, limbs, gestures, or actions associated with a human body, or a marker or sign associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. Specific applications may involve not only interactive scenarios such as navigation, explanation, reconstruction, and virtual-effect overlay display related to real scenes or articles, but also special-effect processing related to persons, such as interactive scenarios including makeup beautification, body beautification, special-effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
Fig. 10 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 10, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A food quality detection method, characterized in that the method comprises:
carrying out region segmentation on a food image to be detected by adopting an image segmentation model to obtain a plurality of region images;
for each region image in the plurality of region images, adopting a preset model corresponding to the region category of the region image to evaluate the region image to obtain an evaluation result of the region image;
and generating a detection result of the food image according to the evaluation result of each region image, wherein the detection result is used for indicating the quality of the food.
2. The method of claim 1, wherein the region categories comprise at least one of cake edge, cheese, and filling, wherein the preset model corresponding to the cake edge comprises a cake edge color model and/or a cake edge integrity model, the cheese corresponds to a cheese color model, and the filling corresponds to a filling dispersion degree model.
3. The method according to claim 2, wherein the evaluating the region image by using the preset model corresponding to the region category of the region image to obtain the evaluation result of the region image comprises:
under the condition that the region category of the region image is a cake edge, evaluating the region image by adopting the cake edge color model, and determining whether the cake edge color of the food meets a first preset condition;
and/or,
under the condition that the region category of the region image is a cake edge, evaluating the region image by adopting the cake edge integrity model, and determining whether the cake edge integrity of the food meets a second preset condition;
and/or,
under the condition that the region category of the region image is cheese, evaluating the region image by adopting the cheese color model, and determining whether the cheese color of the food meets a third preset condition;
and/or,
under the condition that the region category of the region image is the filling, evaluating the region image by adopting the filling dispersion degree model, and determining whether the filling dispersion degree of the food meets a fourth preset condition.
4. The method according to any one of claims 1 to 3, wherein the generating of the detection result of the food image according to the evaluation result of each region image comprises:
and generating a visual detection result graph of the food image according to the evaluation result of each region image, wherein the visual detection result graph comprises the food image, the position of each region image in the food image and the evaluation result of each region image.
5. The method of claim 4, further comprising:
and sending the visual detection result map to a server, wherein the food image is transmitted based on printable characters, and the position of each area image in the food image and the evaluation result of each area image are transmitted based on a standard digital format character string.
6. The method according to any one of claims 1 to 5, wherein the preset model includes a cascaded global pooling module, the cascaded global pooling module includes a global average pooling layer and a global maximum pooling layer, and the evaluating the region image by using the preset model corresponding to the region category of the region image to obtain the evaluation result of the region image includes:
extracting feature information of the region image by using the cascaded global pooling module in the preset model corresponding to the region category of the region image, and obtaining the evaluation result of the region image based on the feature information of the region image.
7. A food quality detection system, characterized by comprising an edge device and a server, wherein the edge device comprises an image acquisition module and a food quality detection module;
the image acquisition module is used for acquiring food images to be detected;
the food quality detection module is used for performing quality detection on the food image acquired by the image acquisition module through the food quality detection method of any one of claims 1 to 6 to obtain a detection result of the food image, and the detection result is used for indicating the quality of the food;
the server is used for receiving and displaying the detection result of the food image.
8. The system of claim 7, wherein the server is further configured to:
and recording the detection result of the food image and returning a recording completion message to the image acquisition module.
9. The system of claim 7, wherein the server is further configured to:
and sending an image segmentation model and a preset model corresponding to each region type to the food quality detection module.
10. The system of claim 9, wherein the server is further configured to:
and adjusting parameters of the image segmentation model and the preset models corresponding to the region types according to the detection result of the food image, and sending the adjusted image segmentation model and the adjusted preset models corresponding to the region types to the food quality detection module.
11. The system of any one of claims 7 to 10, wherein the image acquisition module is further configured to:
in the event that a moving object is detected, the food product image is acquired.
12. The system of any one of claims 7 to 11, wherein the image acquisition module is further configured to:
extracting feature information of a currently acquired food image;
determining the similarity between the feature information and feature information of a previously acquired food image;
discarding the currently acquired food image under the condition that the similarity is greater than a preset threshold;
and under the condition that the similarity is less than or equal to the preset threshold, sending the currently acquired food image to the food quality detection module.
13. A food quality detection apparatus, the apparatus comprising:
the segmentation module is used for carrying out region segmentation on the food image to be detected by adopting an image segmentation model to obtain a plurality of region images;
the evaluation module is used for evaluating, for each region image in the plurality of region images obtained by the segmentation module, the region image by adopting a preset model corresponding to the region category of the region image to obtain an evaluation result of the region image;
and the generating module is used for generating a detection result of the food image according to the evaluation result of each region image, and the detection result is used for indicating the quality of the food.
14. The apparatus of claim 13, wherein the region categories comprise at least one of cake edge, cheese, and filling, wherein the preset model corresponding to the cake edge comprises a cake edge color model and/or a cake edge integrity model, the cheese corresponds to a cheese color model, and the filling corresponds to a filling dispersion degree model.
15. The apparatus of claim 14, wherein the evaluation module is further configured to:
under the condition that the region category of the region image is a cake edge, evaluating the region image by adopting the cake edge color model, and determining whether the cake edge color of the food meets a first preset condition;
and/or,
under the condition that the region category of the region image is a cake edge, evaluating the region image by adopting the cake edge integrity model, and determining whether the cake edge integrity of the food meets a second preset condition;
and/or,
under the condition that the region category of the region image is cheese, evaluating the region image by adopting the cheese color model, and determining whether the cheese color of the food meets a third preset condition;
and/or,
under the condition that the region category of the region image is the filling, evaluating the region image by adopting the filling dispersion degree model, and determining whether the filling dispersion degree of the food meets a fourth preset condition.
16. The apparatus of any one of claims 13 to 15, wherein the generating module is further configured to:
and generating a visual detection result graph of the food image according to the evaluation result of each region image, wherein the visual detection result graph comprises the food image, the position of each region image in the food image and the evaluation result of each region image.
17. The apparatus of claim 16, further comprising:
and the sending module is used for sending the visual detection result graph to a server, wherein the food images are transmitted based on printable characters, and the position of each area image in the food images and the evaluation result of each area image are transmitted based on a standard digital format character string.
18. The apparatus according to any one of claims 13 to 17, wherein the preset model includes a cascaded global pooling module, the cascaded global pooling module includes a global average pooling layer and a global maximum pooling layer, and the evaluation module is further configured to:
extract feature information of the region image by using the cascaded global pooling module in the preset model corresponding to the region category of the region image, and obtain the evaluation result of the region image based on the feature information of the region image.
19. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 6.
20. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 6.
CN202111454961.6A 2021-12-01 2021-12-01 Food quality detection method and device, electronic equipment and storage medium Pending CN114187248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111454961.6A CN114187248A (en) 2021-12-01 2021-12-01 Food quality detection method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114187248A (en) 2022-03-15

Family

ID=80603201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111454961.6A Pending CN114187248A (en) 2021-12-01 2021-12-01 Food quality detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114187248A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114943737A (en) * 2022-07-25 2022-08-26 深圳中食匠心食品有限公司 Flaky pastry quality evaluation method and device and readable storage medium
CN115511396A (en) * 2022-11-22 2022-12-23 成都银光软件有限公司 Food management equipment operation monitoring method and system based on data analysis


Similar Documents

Publication Publication Date Title
US20210168108A1 (en) Messaging system with avatar generation
WO2020216054A1 (en) Sight line tracking model training method, and sight line tracking method and device
CN107886032B (en) Terminal device, smart phone, authentication method and system based on face recognition
KR102506341B1 (en) Devices, systems and methods of virtualizing a mirror
US11450051B2 (en) Personalized avatar real-time motion capture
EP3921806A1 (en) Body pose estimation
US11763481B2 (en) Mirror-based augmented reality experience
CN110956061B (en) Action recognition method and device, and driver state analysis method and device
CN114187248A (en) Food quality detection method and device, electronic equipment and storage medium
US11594025B2 (en) Skeletal tracking using previous frames
CN111986076A (en) Image processing method and device, interactive display device and electronic equipment
CN112115894B (en) Training method and device of hand key point detection model and electronic equipment
US20220270265A1 (en) Whole body visual effects
US20230419497A1 (en) Whole body segmentation
US20240070976A1 (en) Object relighting using neural networks
CN113576451A (en) Respiration rate detection method and device, storage medium and electronic equipment
EP3764326A1 (en) Video lighting using depth and virtual lights
WO2024011181A1 (en) Dynamically switching between rgb and ir capture
WO2023154544A1 (en) Interactively defining an object segmentation
CN109509162B (en) Image acquisition method, terminal, storage medium and processor
CN112036487A (en) Image processing method and device, electronic equipment and storage medium
US20240161242A1 (en) Real-time try-on using body landmarks
CN114399703A (en) Object identification method and device, electronic equipment and storage medium
CN117670734A (en) Image restoration method and device, electronic equipment and storage medium
CN115115663A (en) Face image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20220315
Assignee: Baisheng Consultation (Shanghai) Co.,Ltd.
Assignor: Shengdoushi (Shanghai) Technology Development Co.,Ltd.
Contract record no.: X2023310000138
Denomination of invention: Food quality testing methods and devices, electronic equipment, and storage media
License type: Common License
Record date: 20230714
