CN111259805A - Meat detection method, device, equipment and storage medium


Info

Publication number
CN111259805A
CN111259805A (application CN202010050159.XA)
Authority
CN
China
Prior art keywords
meat
detected
image
semantic segmentation
meat detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010050159.XA
Other languages
Chinese (zh)
Inventor
李雅琴
朱明明
袁操
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Polytechnic University
Original Assignee
Wuhan Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Polytechnic University
Priority claimed from CN202010050159.XA
Publication of CN111259805A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a meat detection method comprising the following steps: collecting a hyperspectral image of the meat to be detected; performing convolution processing on the hyperspectral image with a fully convolutional network to obtain a heat map to be detected of the hyperspectral image; performing semantic segmentation processing on the heat map to be detected to obtain a semantic segmentation image to be detected; and performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result. The invention also discloses a meat detection device, equipment and a storage medium. By replacing manual inspection, the invention improves the efficiency of meat detection.

Description

Meat detection method, device, equipment and storage medium
Technical Field
The invention relates to the field of image processing, in particular to a meat detection method, a meat detection device, meat detection equipment and a storage medium.
Background
Meat is an indispensable food in the human daily diet, providing the protein, vitamins and minerals needed for human health. Meats such as pork, beef and mutton are popular with consumers for their high protein content. With social and economic development, the demand for meat keeps growing, and the detection and sorting of meat has become an important link in the industry. However, existing meat detection still relies largely on manual inspection, which is inefficient and wastes considerable human resources.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a meat detection method, a meat detection device, meat detection equipment and a storage medium, in order to solve the technical problem of the low efficiency of manual meat detection.
In order to achieve the above object, the present invention provides a meat detection method, comprising: collecting a hyperspectral image of a meat image to be detected; carrying out convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map to be tested of the hyperspectral image; performing semantic segmentation processing on the heat map to be detected to obtain a semantic segmentation image to be detected of the heat map to be detected; and carrying out meat detection on the semantic segmentation image to be detected to obtain a meat detection result.
Optionally, after the step of performing convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map of the hyperspectral image, and taking the heat map as a heat map to be detected, the meat detection method further includes: and amplifying the heat map to be detected to be consistent with the size of the hyperspectral image by utilizing a deconvolution network, and executing semantic segmentation processing on the heat map to be detected according to the heat map to be detected with the size consistent with the hyperspectral image to obtain a semantic segmentation image to be detected of the heat map to be detected.
Optionally, the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically includes: matching the semantic segmentation image to be detected with a standard meat image set to obtain a matching result; and acquiring a meat type corresponding to the to-be-detected meat image according to the matching result, and taking the meat type as a meat detection result.
Optionally, the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically includes: dividing the semantic segmentation image to be detected into a plurality of categories of regional subgraphs according to preset rules according to the parameters of each pixel point in the semantic segmentation image to be detected; calculating the ratio of the sum of the areas of the regional subgraphs corresponding to the categories to the area of the semantic segmentation image to be detected as the area ratio of the categories; and judging the meat type corresponding to the meat image to be detected according to the area ratio of each type, and taking the meat type as a meat detection result.
Optionally, the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically includes: and carrying out meat detection on the semantic segmentation image to be detected by utilizing a meat classification model to obtain a meat detection result.
Optionally, before the step of performing meat detection on the semantic segmentation image to be detected by using the meat classification model to obtain a meat detection result, the meat detection method further includes: acquiring a training set formed by classified semantic segmentation training images; and training a preset machine learning model by using the training set to obtain a meat classification model.
Optionally, the step of training a preset machine learning model by using the training set to obtain a meat classification model specifically includes: training the preset machine learning model by using the training set to obtain an intermediate model; acquiring a test set consisting of classified semantic segmentation test images; performing meat detection on the test set by using the intermediate model to obtain a meat detection test result; and adjusting the model parameters of the intermediate model by using the test result and the classification information of the test set to obtain a meat classification model.
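The train-then-adjust procedure described above (train an intermediate model on the training set, test it on a labelled test set, then adjust the model parameters using the test results) can be sketched as follows. This is an illustrative stand-in only: the threshold "model", the lean-area-ratio feature, and the adjustment scheme are assumptions, not the patent's actual machine learning model.

```python
# Illustrative sketch, not the patent's implementation: each sample is a
# (lean-area ratio, true label) pair; the "model" is a single threshold.

def train_intermediate_model(training_set):
    """Fit a toy threshold model between the two class means."""
    lean = [r for r, label in training_set if label == "lean"]
    fat = [r for r, label in training_set if label == "fat"]
    return {"threshold": (sum(lean) / len(lean) + sum(fat) / len(fat)) / 2}

def predict(model, lean_ratio):
    return "lean" if lean_ratio >= model["threshold"] else "fat"

def adjust_with_test_set(model, test_set, step=0.01, rounds=50):
    """Nudge the threshold while that improves test-set accuracy."""
    def accuracy(m):
        return sum(predict(m, r) == label for r, label in test_set) / len(test_set)
    best = dict(model)
    for _ in range(rounds):
        for delta in (-step, step):
            trial = {"threshold": best["threshold"] + delta}
            if accuracy(trial) > accuracy(best):
                best = trial
    return best

training_set = [(0.8, "lean"), (0.75, "lean"), (0.3, "fat"), (0.2, "fat")]
test_set = [(0.7, "lean"), (0.25, "fat")]

intermediate = train_intermediate_model(training_set)   # intermediate model
meat_classifier = adjust_with_test_set(intermediate, test_set)  # adjusted model
print(predict(meat_classifier, 0.9))  # lean
```

The three stages mirror the claimed steps: fitting yields the intermediate model, the labelled test set provides the meat detection test results, and the adjustment pass uses those results to refine the model parameters.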
In addition, to achieve the above object, the present invention also provides a meat detecting device including: the acquisition module, used for acquiring a hyperspectral image of the meat image to be detected; the convolution processing module, used for performing convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map of the hyperspectral image, the heat map being used as the heat map to be detected; the semantic segmentation module, used for performing semantic segmentation processing on the heat map to be detected to obtain a semantic segmentation image of the heat map to be detected, the semantic segmentation image being used as the semantic segmentation image to be detected; and the detection module, used for performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result.
Further, to achieve the above object, the present invention also provides a meat detecting apparatus including: a memory, a processor and a meat detection program stored on the memory and executable on the processor, the meat detection program when executed by the processor implementing the steps of the meat detection method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium having a meat detection program stored thereon, which when executed by a processor, implements the steps of the meat detection method as described above.
According to the meat detection method, device, equipment and storage medium provided by the embodiments of the invention, a hyperspectral image of the meat to be detected is collected; the hyperspectral image is convolved with a full convolution network to obtain the heat map to be detected; the heat map to be detected is semantically segmented to obtain the semantic segmentation image to be detected; and meat detection is then performed on the semantic segmentation image to be detected to obtain a meat detection result. Manual inspection is thereby replaced, and meat can be detected simply, rapidly and accurately.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of the meat detection method of the present invention;
FIG. 3 is a detailed flowchart of step S208 of the meat detection method of FIG. 2 according to an embodiment of the present invention;
FIG. 4 is a schematic view of another detailed process of step S208 of the meat detection method of FIG. 2 according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating steps before step S208 in the meat detection method of the embodiment of the present invention shown in FIG. 2;
FIG. 6 is a schematic view of a detailed process of step S504 of the meat detection method of the embodiment of the present invention shown in FIG. 5;
FIG. 7 is a block diagram of the meat detecting device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be a PC, or a mobile terminal device with a display function, such as a smartphone, a tablet computer, an e-book reader, an MP3 (MPEG Audio Layer III) player, an MP4 (MPEG Audio Layer IV) player, a portable computer, and the like.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. Such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that may turn off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when the mobile terminal is stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration), vibration recognition related functions (such as pedometer and tapping) and the like for recognizing the attitude of the mobile terminal; of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is one type of computer storage medium, may include an operating system, a network communication module, a user interface module, and a meat detection program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke the meat detection program stored in the memory 1005 and perform the following operations: collecting a hyperspectral image of a meat image to be detected; carrying out convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map to be tested of the hyperspectral image; performing semantic segmentation processing on the heat map to be detected to obtain a semantic segmentation image to be detected of the heat map to be detected; and carrying out meat detection on the semantic segmentation image to be detected to obtain a meat detection result.
Alternatively, the processor 1001 may call the meat detection program stored in the memory 1005, and further perform the following operations: after the step of performing convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map of the hyperspectral image, and taking the heat map as a heat map to be detected, the meat detection method further comprises the following steps: and amplifying the heat map to be detected to be consistent with the size of the hyperspectral image by utilizing a deconvolution network, and executing semantic segmentation processing on the heat map to be detected according to the heat map to be detected with the size consistent with the hyperspectral image to obtain a semantic segmentation image to be detected of the heat map to be detected.
Alternatively, the processor 1001 may call the meat detection program stored in the memory 1005, and further perform the following operations: the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically comprises the following steps: matching the semantic segmentation image to be detected with a standard meat image set to obtain a matching result; and acquiring a meat type corresponding to the to-be-detected meat image according to the matching result, and taking the meat type as a meat detection result.
Alternatively, the processor 1001 may call the meat detection program stored in the memory 1005, and further perform the following operations: the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically comprises the following steps: dividing the semantic segmentation image to be detected into a plurality of categories of regional subgraphs according to preset rules according to the parameters of each pixel point in the semantic segmentation image to be detected; calculating the ratio of the sum of the areas of the regional subgraphs corresponding to the categories to the area of the semantic segmentation image to be detected as the area ratio of the categories; and judging the meat type corresponding to the meat image to be detected according to the area ratio of each type, and taking the meat type as a meat detection result.
Alternatively, the processor 1001 may call the meat detection program stored in the memory 1005, and further perform the following operations: the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically comprises the following steps: and carrying out meat detection on the semantic segmentation image to be detected by utilizing a meat classification model to obtain a meat detection result.
Alternatively, the processor 1001 may call the meat detection program stored in the memory 1005, and further perform the following operations: before the step of performing meat detection on the semantic segmentation image to be detected by using the meat classification model to obtain a meat detection result, the meat detection method further comprises the following steps: acquiring a training set formed by classified semantic segmentation training images; and training a preset machine learning model by using the training set to obtain a meat classification model.
Alternatively, the processor 1001 may call the meat detection program stored in the memory 1005, and further perform the following operations: the step of training a preset machine learning model by using the training set to obtain a meat classification model specifically includes: training the preset machine learning model by using the training set to obtain an intermediate model; acquiring a test set composed of classified semantic segmentation test images; performing meat detection on the test set by using the intermediate model to obtain a meat detection test result; and adjusting the model parameters of the intermediate model by using the test result and the classification information of the test set to obtain the meat classification model.
Referring to fig. 2, an embodiment of a meat detection method includes:
step S202, collecting a hyperspectral image of a meat image to be detected;
It should be noted that the meat image to be detected is obtained by photographing the meat to be detected. After the meat image is obtained, a series of image processing operations, including cropping and denoising, are applied to it to obtain a clearer image. In addition, to improve detection, the user can place the meat against a single background color before photographing; the background color is preferably white or green.
In this embodiment, the terminal collects the hyperspectral image of the meat image to be detected. Specifically, the terminal can acquire the hyperspectral image through a hyperspectral image acquisition system built into the terminal, or through an external hyperspectral image acquisition system communicatively connected to the terminal, so as to obtain the hyperspectral image of the meat image to be detected.
Step S204, carrying out convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map to be tested of the hyperspectral image;
It should be noted that a fully convolutional network (FCN) is an end-to-end image segmentation method in which the network makes pixel-level predictions to directly produce a heat map. The FCN classifies images at the pixel level, thereby solving the problem of semantic-level image segmentation (semantic segmentation). In this embodiment, the terminal applies convolution and pooling to the hyperspectral image multiple times with the full convolution network; after each convolution and pooling step the resulting image becomes smaller and its resolution lower, and the smallest image obtained after the last step is the heat map to be detected of the hyperspectral image. The number of convolution and pooling passes can be preset, and once the preset number is reached, convolution and pooling stop and the heat map to be detected is obtained. In other embodiments, the size of the heat map to be detected can be preset instead: convolution and pooling stop once the image reaches the preset size. It is understood that one skilled in the art can choose an appropriate stopping criterion for the convolution and pooling process according to actual needs to obtain the required heat map to be detected.
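The repeated shrink-until-preset-size loop can be sketched with plain 2x2 max pooling. This is a minimal stand-in: a real FCN interleaves learned convolutions with the pooling, while this sketch only illustrates how the map shrinks to the preset heat-map size.

```python
# Sketch of the downsampling stage: pool the image repeatedly until it
# reaches a preset target size, yielding the (toy) heat map to be detected.

def max_pool_2x2(grid):
    """2x2 max pooling over a nested-list image; halves each dimension."""
    h, w = len(grid), len(grid[0])
    return [[max(grid[i][j], grid[i][j + 1],
                 grid[i + 1][j], grid[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def downsample_to_heat_map(image, target_size):
    """Stop pooling once the preset size is reached (the second stopping
    criterion described in the text)."""
    while len(image) > target_size:
        image = max_pool_2x2(image)
    return image

image = [[(i * 8 + j) % 7 for j in range(8)] for i in range(8)]
heat_map = downsample_to_heat_map(image, target_size=2)
print(len(heat_map), len(heat_map[0]))  # 2 2
```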
In one embodiment, after the step S204, the meat detection method further includes: and amplifying the heat map to be tested to be consistent with the size of the hyperspectral image by using a deconvolution network, and executing step S206 according to the heat map to be tested with the size consistent with the hyperspectral image.
It should be noted that the FCN can accept an input image of any size, and the deconvolution layer upsamples the feature map of the last convolution layer to restore it to the same size as the input image. A prediction can thus be generated for each pixel while preserving the spatial information of the original input, and pixel-by-pixel classification is finally performed on the upsampled feature map. In this embodiment, after the convolution layers of the FCN have reduced the hyperspectral image to the smallest feature map, that is, the heat map to be detected, the deconvolution network further upsamples the heat map so that its size is restored to that of the hyperspectral image. Specifically, the terminal multiplies the heat map to be detected output in step S204 with the deconvolution kernels, restores the size according to the corresponding positions, stride and padding, and sums the overlapping portions to finally obtain a pixel-to-pixel feature map, that is, the heat map to be detected restored to the size of the hyperspectral image. Step S206 is then performed on this restored heat map.
Step S206, performing semantic segmentation processing on the heat map to be tested to obtain a semantic segmentation image to be tested of the heat map to be tested;
In this embodiment, the terminal further performs semantic segmentation processing on the obtained heat map to be detected. Specifically, the terminal fine-tunes the heat map and fuses deep coarse information with shallow fine information to ensure the accuracy of spatial localization and of edge-region segmentation. For example, the output of the conv7 layer is upsampled by 2x, pool4 is passed through a 1x1 convolution, and since the 2x conv7 map now has the same size as pool4, the two are fused; the fused result is finally upsampled by a factor of 16 for prediction, yielding the FCN-16s output.
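The fusion of deep and shallow information described above can be sketched as follows. Nearest-neighbour upsampling and elementwise addition stand in for the learned deconvolution and 1x1-convolved skip connection of a real FCN-16s; the feature values are made up for illustration.

```python
# Sketch of the skip fusion: upsample the deep (coarse) map by 2x, add it
# elementwise to a same-sized shallow map, then keep upsampling toward the
# input size. Real FCNs learn these upsampling weights.

def upsample_2x(grid):
    """Nearest-neighbour 2x upsampling of a nested-list feature map."""
    out = []
    for row in grid:
        doubled = [v for v in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out

def fuse(a, b):
    """Elementwise sum of two equally sized feature maps."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

conv7 = [[1, 2], [3, 4]]              # deep, coarse map (conv7 analogue)
pool4 = [[0, 1, 0, 1], [1, 0, 1, 0],  # shallow, finer map (pool4 after 1x1 conv)
         [0, 1, 0, 1], [1, 0, 1, 0]]

fused = fuse(upsample_2x(conv7), pool4)       # 2x conv7 matches pool4's size
restored = upsample_2x(upsample_2x(fused))    # continue toward input size
print(len(restored), len(restored[0]))        # 16 16
```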
Step S208, performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result.
It should be noted that the terminal performs meat detection using the semantic segmentation image to be detected to obtain the detection result for the corresponding meat. Because lean meat and fat meat differ in color, they are segmented into different colors in the semantic segmentation image. In this embodiment, the meat type can be determined from the ratio of the different colors in the semantic segmentation image. Specifically, the terminal calculates the proportion of each color and the meat component each color represents; for example, a first color represents lean meat and a second color represents fat meat. If the proportion of the first color is greater than a first preset ratio, the meat is judged to be lean meat; if the proportion of the first color is smaller than a second preset ratio, the meat is judged to be fat meat, where 0 < second preset ratio < first preset ratio < 100%. In this embodiment, the meat type can also be determined from the distribution of the colors: for example, if the colors are dispersed, such as in a snowflake pattern, the meat is judged to be streaky meat, and the meat detection result is obtained accordingly.
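The color-ratio judgment above can be sketched as follows. Pixel labels stand in for the segmentation colors (first color = lean, second color = fat), and the two preset ratios are illustrative values, not figures from the patent.

```python
# Sketch of step S208's ratio rule: classify by the lean-colour share of
# the segmentation, with the intermediate band treated as streaky meat.

def classify_by_color_ratio(pixels, first_preset=0.6, second_preset=0.3):
    """Return a meat type from the lean-colour share (0 < second < first < 1)."""
    lean_share = sum(p == "lean" for p in pixels) / len(pixels)
    if lean_share > first_preset:
        return "lean meat"
    if lean_share < second_preset:
        return "fat meat"
    return "streaky meat"  # mixed case, cf. the dispersed-colour judgment

pixels = ["lean"] * 70 + ["fat"] * 30
print(classify_by_color_ratio(pixels))  # lean meat
```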
In this embodiment, by collecting the hyperspectral image of the meat image to be detected, convolving the hyperspectral image with the full convolution network to obtain the heat map to be detected, performing semantic segmentation on the heat map to obtain the semantic segmentation image to be detected, and then performing meat detection on the semantic segmentation image, the meat detection result is obtained. Manual inspection is thereby replaced, and meat can be detected simply, rapidly and accurately.
In one embodiment, referring to fig. 3, the step S208 specifically includes:
step S302, matching the semantic segmentation image to be detected with a standard meat image set to obtain a matching result;
It should be noted that this embodiment provides a specific method for performing meat detection on the semantic segmentation image to be detected. The standard meat image set comprises standard semantic segmentation images corresponding to various standard meat categories, including but not limited to standard lean meat, standard fat meat and standard streaky pork. The terminal matches the semantic segmentation image to be detected against the standard meat image set to obtain a matching result. Specifically, the terminal compares the semantic segmentation image to be detected with the standard semantic segmentation images one by one, until a standard image matching the image features of the semantic segmentation image to be detected is found or all comparisons are completed. If a matching standard image is found, it is taken as the matching result; if no match is found by the time all comparisons are completed, a match-failure result is generated and the process ends.
Step S304, acquiring a meat type corresponding to the to-be-detected meat image according to the matching result, and taking the meat type as a meat detection result.
Further, if the standard semantic segmentation image matched with the image features of the semantic segmentation image to be detected is matched, the standard meat type corresponding to the matched standard semantic segmentation image is obtained, the standard meat type is the meat type corresponding to the semantic segmentation image to be detected, and the meat type is used as a meat detection result.
In the embodiment, the semantic segmentation image to be detected is matched with the standard meat image set to obtain the matching result, the meat type corresponding to the meat image to be detected is obtained according to the matching result, and the meat type is used as the meat detection result, so that the detection result can be simply, conveniently and quickly obtained.
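The matching procedure of steps S302-S304 can be sketched as follows. The feature vectors (toy color-share histograms), the distance measure, and the failure threshold are all assumptions for illustration; the patent does not specify how image features are compared.

```python
# Sketch of matching against a standard meat image set: find the nearest
# standard by feature distance, or report failure when nothing is close.

def match_against_standards(features, standard_set, max_distance=0.2):
    """Return the best-matching standard category, or None on failure."""
    best_category, best_dist = None, float("inf")
    for category, std_features in standard_set.items():
        dist = sum(abs(a - b) for a, b in zip(features, std_features))
        if dist < best_dist:
            best_category, best_dist = category, dist
    if best_dist > max_distance:
        return None  # matching failed: no sufficiently similar standard image
    return best_category

standard_set = {
    "lean meat":    [0.9, 0.1],  # [lean-colour share, fat-colour share]
    "fat meat":     [0.1, 0.9],
    "streaky meat": [0.5, 0.5],
}
print(match_against_standards([0.85, 0.15], standard_set))  # lean meat
```

A successful match yields the standard meat category as the detection result; `None` corresponds to the match-failure branch that ends the process.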
In one embodiment, referring to fig. 4, the step S208 specifically includes:
step S402, dividing the semantic segmentation image to be detected into a plurality of classified regional subgraphs according to preset rules according to the parameters of each pixel point in the semantic segmentation image to be detected;
it should be noted that this embodiment provides another specific detection method for performing meat detection on the semantic segmentation image to be detected. The parameters of a pixel point include, but are not limited to, its RGB (Red, Green, Blue) values. The preset rule may be, for example, to classify pixel points whose RGB values fall in a first range into a first category, and to classify pixel points whose RGB values fall in a second range into a second category. The terminal divides the semantic segmentation image to be detected into regional subgraphs of a plurality of categories according to the preset rule and the parameters of each pixel point in the semantic segmentation image to be detected. The regional subgraphs of a category may be contiguous in the plane, i.e. one overall region, or may be a plurality of separated regions.
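The preset rule in step S402 can be illustrated as below. The concrete RGB ranges are assumptions chosen for the example (reddish pixels as the first category, e.g. lean; whitish pixels as the second, e.g. fat) — the patent does not specify the ranges.

```python
# Hypothetical preset rule: label each pixel point by which RGB range it
# falls in. 1 = first category, 2 = second category, 0 = neither range.

def partition_by_rule(img):
    """Return a label map the same shape as img (rows of (r, g, b) tuples)."""
    labels = []
    for row in img:
        lrow = []
        for (r, g, b) in row:
            if r > 150 and g < 100 and b < 100:    # first range: reddish (lean)
                lrow.append(1)
            elif r > 200 and g > 200 and b > 200:  # second range: whitish (fat)
                lrow.append(2)
            else:
                lrow.append(0)
        labels.append(lrow)
    return labels
```

The connected runs of equal labels in the resulting map correspond to the regional subgraphs of each category.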
Step S404, calculating the ratio of the sum of the areas of the regional subgraphs corresponding to the categories to the area of the semantic segmentation image to be detected as the area ratio of the categories;
in this embodiment, the terminal calculates the sum of the areas of the region subgraphs corresponding to each category, and calculates the area of the semantic segmentation image to be detected. Specifically, the terminal can correspondingly calculate the area of the image according to the number of the pixel points. And then calculating the ratio of the sum of the areas of the regional subgraphs corresponding to each category to the area of the semantic segmentation image to be detected as the area ratio of each category. For example, the sum of the areas of the regional subgraphs of the first category is S1, the sum of the areas of the regional subgraphs of the second category is S2, and the area of the semantic segmentation image to be detected is S, the area occupation ratio of the first category is S1/S, and the area occupation ratio of the second category is S2/S.
And S406, judging the meat type corresponding to the meat image to be detected according to the area ratio of each type, and taking the meat type as a meat detection result.
Further, the terminal judges the meat category corresponding to the meat image to be detected according to the area ratio of each category. Specifically, the terminal may compare the area ratios of the categories, calculate the difference between them, and compare that difference with a preset value. For example, if the area ratio of the first category is greater than that of the second category and the difference between the two area ratios is greater than the preset value, the meat category corresponding to the meat image to be detected is judged to be the first category; if the area ratio of the first category is smaller than that of the second category and the difference between the two area ratios is greater than the preset value, the meat category is judged to be the second category; and if the difference between the two area ratios is smaller than the preset value, the meat category is judged to be one in which the first category and the second category alternate (such as streaky pork).
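Steps S404 and S406 can be sketched together, with image areas computed from pixel counts as the text describes. The threshold value and the category names returned are illustrative assumptions.

```python
# Hypothetical sketch of the area-ratio decision: compute each category's
# area ratio from pixel counts, then compare the difference of the ratios
# with a preset value.

def classify_by_area(labels, threshold=0.2):
    """labels: 2-D label map with 1 = first category, 2 = second category."""
    total = sum(len(row) for row in labels)      # image area S (pixel count)
    s1 = sum(row.count(1) for row in labels)     # S1: first-category subgraph area
    s2 = sum(row.count(2) for row in labels)     # S2: second-category subgraph area
    r1, r2 = s1 / total, s2 / total              # area ratios S1/S and S2/S
    if abs(r1 - r2) < threshold:
        return "mixed"                           # categories alternate (streaky)
    return "first" if r1 > r2 else "second"
```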
In this embodiment, the semantic segmentation image to be detected is divided into a plurality of categories of regional subgraphs according to the parameters of each pixel point in the semantic segmentation image to be detected and according to the preset rule, the ratio of the sum of the areas of the regional subgraphs corresponding to each category to the area of the semantic segmentation image to be detected is calculated and used as the area ratio of each category, then the meat category corresponding to the meat image to be detected is judged according to the area ratio of each category, and the meat category is used as the meat detection result, so that the meat category is detected more accurately.
In one embodiment, the step S208 specifically includes: and carrying out meat detection on the semantic segmentation image to be detected by utilizing a meat classification model to obtain a meat detection result.
It should be noted that the meat classification model is a machine learning model trained based on big data in advance, and the meat detection is performed on the semantic segmentation image to be detected by using the meat classification model in the embodiment, so that the meat detection result can be obtained more quickly and accurately.
In one embodiment, referring to fig. 5, before the step S208, the meat detection method further includes:
step S502, acquiring a training set formed by classified semantic segmentation training images;
it should be noted that the classified semantic segmentation training images include semantic segmentation images corresponding to a plurality of meat categories. The number of semantic segmentation training images in the training set can be selected according to actual needs.
And S504, training a preset machine learning model by using the training set to obtain a meat classification model.
In this embodiment, the preset machine learning model may be a neural network model, a Bayesian model, or a Mahalanobis-distance model, and the user may select a corresponding model as the machine learning model according to actual needs. The terminal trains the preset machine learning model with the training set; over multiple rounds of training, the preset machine learning model continuously learns from the training set, thereby yielding the meat classification model.
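As a minimal sketch of step S504, a nearest-centroid classifier can stand in for the preset machine learning model (it is a simplified relative of the distance-based models the text mentions, not the patent's model). Feature vectors here are assumed to summarize each semantic segmentation training image.

```python
# Hypothetical "preset machine learning model": fit stores one centroid per
# meat category; predict returns the category with the nearest centroid.

class NearestCentroidModel:
    def fit(self, features, labels):
        sums, counts = {}, {}
        for f, y in zip(features, labels):
            acc = sums.setdefault(y, [0.0] * len(f))
            for i, v in enumerate(f):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        # centroid = mean feature vector of each category's training images
        self.centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}
        return self

    def predict(self, f):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
        return min(self.centroids, key=lambda y: dist(self.centroids[y]))
```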
In one embodiment, referring to fig. 6, the step S504 specifically includes:
step S602, training the preset machine learning model by using the training set to obtain an intermediate model;
it should be noted that the terminal obtains the intermediate model after performing multiple training on the preset machine learning model by using the training set.
Step S604, obtaining a test set formed by classified semantic segmentation test images;
further, the terminal acquires a test set formed by classified semantic segmentation test images. It should be noted that the test set may be identical to the training set, or may differ from it.
Step S606, meat detection is carried out on the test set by using the intermediate model, and a test result of the meat detection is obtained;
in this embodiment, the terminal further inputs the test set into the intermediate model, so that the intermediate model performs meat detection and produces a meat detection test result. The test result is the classification result the intermediate model obtains by classifying the test set.
And step S608, adjusting the model parameters of the intermediate model by using the test result and the classification information of the test set to obtain a meat classification model.
It should be understood that, in this embodiment, the classification of each semantic segmentation test image in the test set is known. The terminal compares the known classifications of the test set with the test result output by the intermediate model and determines the misjudgment rate of the test result. If the misjudgment rate is less than or equal to a preset misjudgment rate, the intermediate model is taken as the final meat classification model; if the misjudgment rate is greater than the preset misjudgment rate, the model parameters of the intermediate model are adjusted to obtain a more accurate meat classification model.
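The accept-or-adjust loop of steps S606–S608 can be sketched as below. The `adjust` callback and the preset ceiling `max_error` are placeholders: the patent does not specify how the model parameters are tuned.

```python
# Hypothetical sketch: run the intermediate model on the test set, compute
# the misjudgment rate against the known classifications, and accept the
# model only once the rate is at or below the preset value.

def finalize_model(model, test_images, test_labels,
                   max_error=0.1, adjust=None, rounds=5):
    for _ in range(rounds):
        predictions = [model.predict(x) for x in test_images]
        wrong = sum(p != y for p, y in zip(predictions, test_labels))
        error_rate = wrong / len(test_labels)      # misjudgment rate
        if error_rate <= max_error:
            return model    # accepted as the final meat classification model
        if adjust is None:
            break
        model = adjust(model)  # tune model parameters, then re-test
    raise RuntimeError("misjudgment rate still above the preset value")
```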
In this embodiment, performing meat detection on the semantic segmentation image to be detected with the meat classification model yields the meat detection result more quickly and accurately; moreover, a training process for the meat classification model is provided, so the resulting meat classification model has high classification accuracy.
Referring to FIG. 7, an embodiment of a meat detection device includes:
the acquisition module 710 is used for acquiring a hyperspectral image of the meat image to be detected;
a convolution processing module 720, configured to perform convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map of the hyperspectral image, where the heat map is used as a heat map to be tested;
the semantic segmentation module 730 is used for performing semantic segmentation processing on the heat map to be detected to obtain a semantic segmentation image of the heat map to be detected, and the semantic segmentation image is used as the semantic segmentation image to be detected;
and the detection module 740 is configured to perform meat detection on the semantic segmentation image to be detected to obtain a meat detection result.
In this embodiment, a hyperspectral image of the meat image to be detected is collected; the hyperspectral image is convolved with the full convolution network to obtain the heat map to be detected of the hyperspectral image; semantic segmentation processing is performed on the heat map to be detected to obtain the semantic segmentation image to be detected; and meat detection is then performed on the semantic segmentation image to be detected to obtain a meat detection result. Manual inspection is thereby replaced, and meat can be detected simply, quickly, and accurately.
Optionally, the convolution processing module 720 is further configured to enlarge the heat map to be tested to the same size as the hyperspectral image by using a deconvolution network.
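The enlargement this module performs only needs to restore the heat map to the input's spatial size. As a hedged illustration of the size bookkeeping (the real network uses a learned transposed convolution, not the fixed pixel repetition shown here):

```python
# Hypothetical stand-in for deconvolution enlargement: nearest-neighbor
# upsampling of a 2-D heat map back to the hyperspectral image's size.
# Assumes the target dimensions are integer multiples of the heat map's.
import numpy as np

def upsample_to(heat_map, target_h, target_w):
    """Enlarge heat_map to (target_h, target_w) by repeating pixels."""
    h, w = heat_map.shape
    rows = np.repeat(heat_map, target_h // h, axis=0)   # stretch rows
    return np.repeat(rows, target_w // w, axis=1)       # stretch columns
```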
Optionally, the detecting module 740 is further configured to match the semantic segmentation image to be detected with a standard meat image set, so as to obtain a matching result; and acquiring a meat type corresponding to the to-be-detected meat image according to the matching result, and taking the meat type as a meat detection result.
Optionally, the detection module 740 is further configured to divide the semantic segmentation image to be detected into a plurality of categories of regional subgraphs according to a preset rule according to the parameter of each pixel point in the semantic segmentation image to be detected; calculating the ratio of the sum of the areas of the regional subgraphs corresponding to the categories to the area of the semantic segmentation image to be detected as the area ratio of the categories; and judging the meat type corresponding to the meat image to be detected according to the area ratio of each type, and taking the meat type as a meat detection result.
Optionally, the detecting module 740 is further configured to perform meat detection on the to-be-detected semantic segmentation image by using a meat classification model, so as to obtain a meat detection result.
Optionally, the model training module is configured to obtain a training set formed by the classified semantic segmentation training images; and training a preset machine learning model by using the training set to obtain a meat classification model.
Optionally, the model training module is further configured to train the preset machine learning model by using the training set to obtain an intermediate model; acquire a test set formed by classified semantic segmentation test images; perform meat detection on the test set by using the intermediate model to obtain a meat detection test result; and adjust the model parameters of the intermediate model by using the test result and the classification information of the test set to obtain a meat classification model.
In addition, an embodiment of the present invention further provides a meat detection apparatus, where the meat detection apparatus includes: a memory, a processor and a meat detection program stored on the memory and executable on the processor, the meat detection program when executed by the processor implementing the steps of the meat detection method as described above.
In addition, an embodiment of the present invention further provides a storage medium, where the storage medium stores a meat detection program, and the meat detection program, when executed by a processor, implements the steps of the meat detection method as described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A meat detection method, characterized in that the meat detection method comprises the following steps:
collecting a hyperspectral image of a meat image to be detected;
carrying out convolution processing on the hyperspectral image by using a full convolution network to obtain a heat map to be tested of the hyperspectral image;
performing semantic segmentation processing on the heat map to be detected to obtain a semantic segmentation image to be detected of the heat map to be detected;
and carrying out meat detection on the semantic segmentation image to be detected to obtain a meat detection result.
2. The meat detection method of claim 1 wherein after the step of convolving the hyperspectral image with a full convolution network to obtain a heat map of the hyperspectral image as the heat map to be tested, the meat detection method further comprises:
and amplifying the heat map to be detected to be consistent with the size of the hyperspectral image by utilizing a deconvolution network, and executing semantic segmentation processing on the heat map to be detected according to the heat map to be detected with the size consistent with the hyperspectral image to obtain a semantic segmentation image to be detected of the heat map to be detected.
3. The meat detection method of claim 1, wherein the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically comprises:
matching the semantic segmentation image to be detected with a standard meat image set to obtain a matching result;
and acquiring a meat type corresponding to the to-be-detected meat image according to the matching result, and taking the meat type as a meat detection result.
4. The meat detection method of claim 1, wherein the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically comprises:
dividing the semantic segmentation image to be detected into a plurality of categories of regional subgraphs according to preset rules according to the parameters of each pixel point in the semantic segmentation image to be detected;
calculating the ratio of the sum of the areas of the regional subgraphs corresponding to the categories to the area of the semantic segmentation image to be detected as the area ratio of the categories;
and judging the meat type corresponding to the meat image to be detected according to the area ratio of each type, and taking the meat type as a meat detection result.
5. The meat detection method of claim 1, wherein the step of performing meat detection on the semantic segmentation image to be detected to obtain a meat detection result specifically comprises:
and carrying out meat detection on the semantic segmentation image to be detected by utilizing a meat classification model to obtain a meat detection result.
6. The meat detection method of claim 5, wherein before the step of performing meat detection on the semantic segmentation image to be detected by using the meat classification model to obtain a meat detection result, the meat detection method further comprises:
acquiring a training set formed by classified semantic segmentation training images;
and training a preset machine learning model by using the training set to obtain a meat classification model.
7. The meat detection method of claim 6, wherein said step of training a predetermined machine learning model using said training set to obtain a meat classification model specifically comprises:
training the preset machine learning model by using the training set to obtain an intermediate model;
acquiring a test set consisting of classified semantic segmentation test images;
performing meat detection on the test set by using the intermediate model to obtain a meat detection test result;
and adjusting the model parameters of the intermediate model by using the test result and the classification information of the test set to obtain a meat classification model.
8. A meat detection device, comprising:
the acquisition module is used for acquiring a hyperspectral image of the meat image to be detected;
the convolution processing module is used for carrying out convolution processing on the hyperspectral image by utilizing a full convolution network to obtain a heat map of the hyperspectral image, and the heat map is used as a heat map to be tested;
the semantic segmentation module is used for performing semantic segmentation processing on the heat map to be detected to obtain a semantic segmentation image of the heat map to be detected, and the semantic segmentation image is used as the semantic segmentation image to be detected;
and the detection module is used for carrying out meat detection on the semantic segmentation image to be detected to obtain a meat detection result.
9. A meat detection device, comprising: a memory, a processor, and a meat detection program stored on the memory and executable on the processor, the meat detection program when executed by the processor implementing the steps of the meat detection method of any of claims 1 to 7.
10. A storage medium having stored thereon a meat detection program which, when executed by a processor, carries out the steps of the meat detection method of any one of claims 1 to 7.
CN202010050159.XA 2020-01-16 2020-01-16 Meat detection method, device, equipment and storage medium Pending CN111259805A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010050159.XA CN111259805A (en) 2020-01-16 2020-01-16 Meat detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010050159.XA CN111259805A (en) 2020-01-16 2020-01-16 Meat detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111259805A true CN111259805A (en) 2020-06-09

Family

ID=70950599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010050159.XA Pending CN111259805A (en) 2020-01-16 2020-01-16 Meat detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111259805A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424159A (en) * 2017-07-28 2017-12-01 西安电子科技大学 Image, semantic dividing method based on super-pixel edge and full convolutional network
CN108198188A (en) * 2017-12-28 2018-06-22 北京奇虎科技有限公司 Food nutrition analysis method, device and computing device based on picture
WO2018165605A1 (en) * 2017-03-09 2018-09-13 Northwestern University Hyperspectral imaging sensor
CN110163293A (en) * 2019-05-28 2019-08-23 武汉轻工大学 Red meat classification method, device, equipment and storage medium based on deep learning


Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
HONGBING XIAO: "Research on the Method of Hyperspectral and Image Deep Features for Bacon Classification", pages 1 - 5 *
I. MUÑOZ: "Computer image analysis for intramuscular fat segmentation in dry-cured ham slices using convolutional neural networks", pages 65 - 10 *
LEI ZHOU,: "Application of Deep Learning in Food: A Review", pages 1 - 19 *
MAHMOUD AL-SARAYREH: "Deep Spectral-spatial Features of Snapshot Hyperspectral Images for Red-meat Classification", pages 1 - 6 *
MAHMOUD AL-SARAYREH: "Detection of Red-Meat Adulteration by Deep Spectral–Spatial Features in Hyperspectral Images", pages 1 - 20 *
SIMON MEZGEC: "Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment", pages 1193 *
ZHANG MENGYUN: "Research on blueberry bruise detection based on spectral imaging technology", pages 63 - 78 *
YANG LUJING, JI WENYANG, HAO ZHUONAN, LI CHONGLUN, WU JUNFENG: "Python Web Data Crawling and Analysis from Beginner to Mastery: Analysis Volume", vol. 1, Beijing: Beihang University Press, pages: 146 - 147 *
WANG JIUQING: "Chicken quality classification and detection based on convolutional neural networks and hyperspectral imaging", pages 1 - 6 *

Similar Documents

Publication Publication Date Title
CN108121984B (en) Character recognition method and device
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
CN109583483B (en) Target detection method and system based on convolutional neural network
US10366300B1 (en) Systems and methods regarding 2D image and 3D image ensemble prediction models
US9305208B2 (en) System and method for recognizing offensive images
US11670097B2 (en) Systems and methods for 3D image distification
CN107798653A (en) A kind of method of image procossing and a kind of device
CN110738125A (en) Method, device and storage medium for selecting detection frame by using Mask R-CNN
CN110070551B (en) Video image rendering method and device and electronic equipment
CN110084204B (en) Image processing method and device based on target object posture and electronic equipment
CN105046254A (en) Character recognition method and apparatus
CN111914843B (en) Character detection method, system, equipment and storage medium
CN112767366A (en) Image recognition method, device and equipment based on deep learning and storage medium
CN103353881B (en) Method and device for searching application
CN111985465A (en) Text recognition method, device, equipment and storage medium
CN111507324A (en) Card frame identification method, device, equipment and computer storage medium
CN110674873A (en) Image classification method and device, mobile terminal and storage medium
CN112766045A (en) Scene change detection method, system, electronic device and storage medium
CN111598084A (en) Defect segmentation network training method, device and equipment and readable storage medium
CN113191235A (en) Sundry detection method, device, equipment and storage medium
CN110458004B (en) Target object identification method, device, equipment and storage medium
CN111259805A (en) Meat detection method, device, equipment and storage medium
CN111179287A (en) Portrait instance segmentation method, device, equipment and storage medium
CN114067275A (en) Target object reminding method and system in monitoring scene and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination