CN117975444B - Food material image recognition method for food crusher - Google Patents
- Publication number
- CN117975444B (application CN202410362374.1A)
- Authority
- CN
- China
- Prior art keywords
- target point
- point
- representing
- local area
- food
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/68—Food, e.g. fruit or vegetables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention relates to the technical field of image data processing, and in particular to a food material image identification method for a food crusher, which comprises the following steps: obtaining a food material gray level image in a food crusher and marking any pixel point in the image as a target point; obtaining the density distribution expression degree of the target point according to the gray value differences of the pixel points around the target point and the gray value differences of the pixel points in multiple directions of the target point; obtaining the texture feature expression degree of the target point according to the gradient value differences of the pixel points in multiple directions of the target point; obtaining an enhanced image of the food material gray level image from these two degrees; and identifying the enhanced image of the food material gray level image through a trained convolutional neural network (CNN) to obtain the food material type. The invention improves the food material information conveyed by different areas of the image and helps the neural network identify the type of the food material more accurately.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a food material image recognition method for a food crusher.
Background
With the continuous progress of artificial intelligence and machine learning technologies, image recognition has been widely used in the field of food processing. By using deep learning algorithms, food material images can be identified rapidly and accurately, providing technical support for the automatic control of the food crusher. The growing demand of the food market for product diversity and individualization requires food processing enterprises to handle food materials of different types and shapes, so the crusher needs the capability of efficiently processing a variety of food materials.
Food material images captured inside the food crusher are affected by lighting, food material type, stacking state and the like, so the images behave differently in different areas. If a food material image is enhanced directly as a whole by histogram equalization, some areas of the image are over-enhanced, details are lost, and subsequent identification of the food material image is hindered.
Disclosure of Invention
The invention provides a food material image recognition method for a food crusher, which aims to solve the existing problems.
The invention relates to a food material image identification method for a food crusher, which adopts the following technical scheme:
An embodiment of the present invention provides a food material image recognition method for a food crusher, the method comprising the steps of:
acquiring food material gray level images in a food crusher, and marking any pixel point in the food material gray level images as a target point;
Obtaining the gap expression degree of the target point according to the gray value difference of the pixel points around the target point;
Obtaining brightness variation distribution expression of the target point according to the difference between the gray values of the pixel points of the target point in a plurality of directions;
obtaining the density distribution expression degree of the target point according to the gap expression degree and the brightness change distribution expression of the target point;
Obtaining the texture feature expression degree of the target point according to the differences among pixel point gradient values of the target point in multiple directions;
obtaining an enhanced image of the food material gray level image according to the density distribution expression degree and the texture feature expression degree of the target point;
And identifying the enhanced image of the food material gray level image through the trained CNN neural network to obtain the food material type.
Further, the step of obtaining the gap expression degree of the target point according to the gray value difference of the pixel points around the target point comprises the following specific steps:
In the food material gray level image, a local area of size n×n is constructed with the target point as its center, wherein n is the preset side length of the local area; the first K pixel points with the smallest gray values in the local area are marked as reference points, wherein K is a preset quantity threshold;
In the local area, the rectangular area formed by the reference point is marked as a first layer rectangular extension area, the rectangular area formed by all pixel points adjacent to the reference point is marked as a second layer rectangular extension area, and so on, until the M layers of rectangular extension areas of the reference point are obtained, wherein M is the preset number-of-layers threshold of the rectangular extension area;
And obtaining the gap expression degree of the target point according to the Euclidean distance between the target point and the reference point and the gray value of the pixel point in each layer of rectangular expansion area of the reference point.
Further, according to the euclidean distance between the target point and the reference point and the gray value of the pixel point in each layer of rectangular extension area of the reference point, the gap expression degree of the target point is obtained, and the corresponding specific formula is as follows:
Wherein, G represents the gap expression degree of the target point; K is the preset quantity threshold and also the number of reference points; d(i) represents the Euclidean distance between the target point and the i-th reference point; g(i) represents the gray value of the i-th reference point; μ(i,m) represents the average gray value of all pixel points in the m-th layer rectangular extension area of the i-th reference point; σ(i,m) represents the standard deviation of the gray values of all pixel points in the m-th layer rectangular extension area of the i-th reference point; exp(·) is an exponential function with a natural constant as its base; |·| is the absolute value function.
Further, the obtaining the brightness variation distribution of the target point according to the difference between the gray values of the pixel points of the target point in a plurality of directions comprises the following specific steps:
The direction of the eight neighborhood of the target point is recorded as the secondary direction of the target point; the opposite secondary directions of the target point are marked as a group of collinear directions; and obtaining brightness variation distribution expression of the target point according to the gray values of the pixel points in the two secondary directions in the collinear direction.
Further, according to the gray values of the pixel points in the two secondary directions in the collinear direction, the brightness variation distribution expression of the target point is obtained, and the corresponding specific formula is as follows:
Wherein, B represents the brightness change distribution expression of the target point; C represents the number of collinear directions; P represents the number of pixel points in each secondary direction from the target point within the local area; a(c,j) represents the gray value of the j-th pixel point, starting from the target point, along the first secondary direction of the c-th collinear direction within the local area; b(c,j) represents the gray value of the j-th pixel point, starting from the target point, along the second secondary direction of the c-th collinear direction within the local area; σ1(c) is the standard deviation, taken over all j in the interval [1, P], of the corresponding terms along the first secondary direction of the c-th collinear direction; σ2(c) is the standard deviation, taken over all j in the interval [1, P], of the corresponding terms along the second secondary direction of the c-th collinear direction; max(·) takes the maximum over all c in the interval [1, C]; |·| is the absolute value function.
Further, the step of obtaining the density distribution expression level of the target point according to the gap expression level and the brightness variation distribution expression of the target point includes the following specific steps:
And (3) recording the ratio of the brightness change distribution expression of the target point to the gap expression degree of the target point as the density distribution expression degree of the target point.
Further, the obtaining the texture feature expression degree of the target point according to the differences among the gradient values of the pixel points of the target point in a plurality of directions comprises the following specific steps:
Using a sobel operator to obtain a gradient value of each pixel point in the food material gray level image;
For the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area, the mean of the gradient values of all its adjacent pixel points is calculated; the difference between the gradient value of that pixel point and this mean is recorded as the gradient difference of the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area;
And obtaining the texture feature expression degree of the target point according to the difference value of the gradient values of the pixel points in the target point and the local area, the gradient change in each secondary direction and the gradient difference of each pixel point in the local area along each secondary direction from the target point.
Further, according to the difference value of the gradient values of the pixel points in the local area and the target point, the gradient change in each secondary direction, and the gradient difference of each pixel point in the local area along each secondary direction from the target point, the texture feature expression degree of the target point is obtained, and the corresponding specific formula is as follows:
Wherein, T represents the texture feature expression degree of the target point; N represents the number of pixel points in the local area of the target point; V represents the number of secondary directions of the target point; P represents the number of pixel points in each secondary direction from the target point within the local area; Δ(q) represents the difference between the gradient value of the target point and that of the q-th pixel point in its local area; h̄(v,j) represents the mean of the gradient values of the pixel points adjacent to the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area; e(v,j) represents the gradient difference of the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area; min(·) takes the minimum over all v in the interval [1, V]; |·| is the absolute value function.
Further, the step of obtaining the enhanced image of the food gray level image according to the density distribution expression level and the texture feature expression level of the target point comprises the following specific steps:
obtaining comprehensive characteristic parameters of the target point according to the density distribution expression degree and the texture characteristic expression degree of the target point;
according to the comprehensive characteristic parameter of each pixel point, clustering the food material gray level image by a clustering algorithm and dividing the food material gray level image into a plurality of areas to be analyzed;
and respectively carrying out histogram equalization operation on each region to be analyzed divided by the food material gray level image by using a self-adaptive histogram equalization algorithm to obtain an enhanced image of the food material gray level image.
Further, according to the density distribution expression level and the texture feature expression level of the target point, the comprehensive feature parameters of the target point are obtained, and the corresponding specific calculation formula is as follows:
Wherein, Z represents the comprehensive characteristic parameter of the target point; ρ represents the density distribution expression degree of the target point; T represents the texture feature expression degree of the target point; norm(·) is a linear normalization function.
The technical scheme of the invention has the beneficial effects that: obtaining food gray level images in a food crusher, marking any pixel point in the food gray level images as a target point, obtaining the gap expression degree of the target point according to the gray level difference of the pixel points around the target point, obtaining the brightness change distribution expression of the target point according to the difference of the pixel point gray levels of the target point in multiple directions, and obtaining the density distribution expression degree of the target point according to the gap expression degree and the brightness change distribution expression of the target point, thereby being beneficial to the subsequent enhancement of the food image; according to the difference between pixel point gradient values of the target point in multiple directions, the texture feature expression degree of the target point is obtained, and the texture and the non-uniformity of the food material surface can be differentiated; according to the density distribution expression degree and the texture feature expression degree of the target point, an enhanced image of the food material gray level image is obtained, so that the characteristics of the food material can be displayed more clearly, and more accurate image data is provided for subsequent food material identification; the enhanced images of the food material gray level images are identified through the trained CNN neural network to obtain the type of the food material, so that the food material information transmitted by different areas is improved, and the neural network is facilitated to identify the type of the food material more accurately.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of a food image recognition method for a food crusher of the present invention;
Fig. 2 is a gray scale image of food materials in the present embodiment;
fig. 3 is a schematic diagram of clustering results of food material gray level images;
FIG. 4 is a food material gray scale image and its gray scale histogram;
Fig. 5 is an enhanced image of a food material gray scale image and its gray scale histogram.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of a specific implementation, structure, characteristics and effects of a food material image recognition method for a food crusher according to the invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of a food material image recognition method for a food crusher provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for identifying food material images for a food crusher according to an embodiment of the present invention is shown, the method includes the following steps:
Step S001: and (3) acquiring food material gray level images in the food crusher, and marking any pixel point in the food material gray level images as a target point.
The food materials in the machine are photographed by a camera of the food crusher to collect food material images inside the crusher; the images are converted to gray scale to obtain food material gray level images, and a collected food material gray level image is shown in fig. 2. Any pixel point in the food material gray level image is marked as a target point.
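As a concrete illustration of this acquisition step, the following is a minimal sketch assuming an OpenCV capture from the crusher's camera; the camera index and the function name are placeholders rather than details given by the patent.

```python
import cv2
import numpy as np


def capture_food_gray_image(camera_index: int = 0) -> np.ndarray:
    """Grab one frame from the crusher camera and convert it to a gray-level image."""
    cap = cv2.VideoCapture(camera_index)  # camera index is an assumption
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to read a frame from the crusher camera")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```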
Step S002: and obtaining the gap expression degree of the target point according to the gray value difference of the pixel points around the target point.
Due to factors such as the internal structure of the crusher and the size of the food materials, the food materials are stacked differently in different places inside the crusher, so the captured food material image shows different gaps in different areas. The smaller the stacking gaps, the darker the gap parts appear while the food material parts remain brighter; therefore the gray values of the gap parts are smaller than those of the food material parts, and the gray values inside a gap region are similar to one another.
In this embodiment, the preset side length n of the local area is 21, the preset quantity threshold K is 10, and the preset number of layers M of the rectangular extension area is 3; these values are given only as an example, other values may be set in other embodiments, and this embodiment is not limited thereto.
A local area of size n×n is constructed with the target point in the food material gray level image as its center. In this embodiment, the gray values of all pixel points in the local area are traversed, the pixel points with the smallest gray values are screened out, and the first K pixel points with the smallest gray values in the local area are marked as reference points, K being the preset quantity threshold. Taking the i-th reference point as an example, the rectangular area formed by the i-th reference point itself is marked as the first layer rectangular extension area; the rectangular area formed by all pixel points adjacent to the i-th reference point is marked as the second layer rectangular extension area; the rectangular area formed by all pixel points adjacent to the second layer (excluding the pixel points already contained in the inner layers) is marked as the third layer rectangular extension area; and so on, the M layers of rectangular extension areas of the i-th reference point are obtained. The M layers of rectangular extension areas of every reference point are obtained in the same way.
What needs to be described is: taking the 3×3 neighborhood of a reference point as an example and numbering its pixel points so that the reference point itself is 9 and its eight adjacent pixel points are 1, 2, 3, 4, 5, 6, 7 and 8, pixel point 9 constitutes the first layer rectangular extension area and pixel points 1 to 8 constitute the second layer rectangular extension area. The calculation formula of the gap expression degree of the target point is as follows:
Wherein, G represents the gap expression degree of the target point; K is the preset quantity threshold and also the number of reference points; M is the preset number-of-layers threshold of the rectangular extension area; d(i) represents the Euclidean distance between the target point and the i-th reference point; g(i) represents the gray value of the i-th reference point; μ(i,m) represents the average gray value of all pixel points in the m-th layer rectangular extension area of the i-th reference point; σ(i,m) represents the standard deviation of the gray values of all pixel points in the m-th layer rectangular extension area of the i-th reference point; exp(·) is an exponential function with a natural constant as its base; |·| is the absolute value function. The term |g(i) − μ(i,m)| compares the gray value of the reference point with the average gray value of each layer rectangular extension area of that reference point: the smaller it is, the closer the two are and the stronger the gap expression. Because only the average gray value of the extension area is used here, σ(i,m) is further used to judge the gray consistency of the rectangular extension area: the smaller σ(i,m), the higher the gray consistency, which fits the characteristic that gray values inside a gap region are similar. Therefore, the smaller |g(i) − μ(i,m)| and the smaller σ(i,m), the larger the gap performance of the current reference point; the gap expression degree of the target point is obtained by combining the gap representations of all K reference points with their distances d(i) to the target point.
So far, the gap expression degree of the target point is obtained.
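Because the formula itself survives only as the list of variables above, the following sketch illustrates one plausible reading of the gap expression degree, not the patent's definitive formula: the K darkest pixel points of the local area are taken as reference points, M Chebyshev-distance layers are built around each of them, and a reference point contributes more when its gray value is close to the layer means, the layer standard deviations are small, and it lies near the target point. The function name, the distance weighting and the exp-based combination are assumptions.

```python
import numpy as np


def gap_expression(gray: np.ndarray, y: int, x: int,
                   n: int = 21, K: int = 10, M: int = 3) -> float:
    """One plausible reading of the gap expression degree of the target point (y, x)."""
    r = n // 2
    ys, ye = max(0, y - r), min(gray.shape[0], y + r + 1)
    xs, xe = max(0, x - r), min(gray.shape[1], x + r + 1)
    local = gray[ys:ye, xs:xe].astype(np.float64)

    # reference points: the K darkest pixel points of the local area
    flat_idx = np.argsort(local, axis=None)[:K]
    ref_pts = np.column_stack(np.unravel_index(flat_idx, local.shape))

    yy, xx = np.ogrid[:local.shape[0], :local.shape[1]]
    total = 0.0
    for ry, rx in ref_pts:
        d = np.hypot((ry + ys) - y, (rx + xs) - x)  # distance to the target point
        g_ref = local[ry, rx]
        consistency = 0.0
        for m in range(1, M + 1):
            # m-th layer extension area: pixels at Chebyshev distance m-1 from the reference point
            ring = np.maximum(np.abs(yy - ry), np.abs(xx - rx)) == (m - 1)
            vals = local[ring]
            if vals.size == 0:
                continue
            consistency += abs(g_ref - vals.mean()) + vals.std()
        # nearer reference points with dark, uniform surroundings raise the score (assumed weighting)
        total += np.exp(-consistency / M) / (1.0 + d)
    return total / K
```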
Step S003: and obtaining brightness change distribution expression of the target point according to the difference between the gray values of the pixel points of the target point in a plurality of directions.
In the present embodiment, the direction of the eight neighborhoods is described as an example, and other directions may be set in other embodiments, and the present embodiment is not limited thereto. The eight neighbors are oriented in up, down, left, right, and four diagonal directions.
Taking the local area corresponding to the target point as an example, the direction of the eight neighborhood of the target point is marked as the secondary direction of the target point; the opposite secondary directions of the target point were noted as a set of collinear directions, yielding 4 sets of collinear directions. The calculation formula of the brightness variation distribution expression of the target point is as follows:
Wherein, B represents the brightness change distribution expression of the target point; C represents the number of collinear directions; P represents the number of pixel points in each secondary direction from the target point within the local area; a(c,j) represents the gray value of the j-th pixel point, starting from the target point, along the first secondary direction of the c-th collinear direction within the local area; b(c,j) represents the gray value of the j-th pixel point, starting from the target point, along the second secondary direction of the c-th collinear direction within the local area; |·| is the absolute value function; σ1(c) is the standard deviation, taken over all j in the interval [1, P], of the corresponding terms along the first secondary direction of the c-th collinear direction, and the smaller it is, the more it indicates that a certain brightness variation exists along the first secondary direction of the c-th collinear direction, starting from the target point, within the local area; σ2(c) is the standard deviation, taken over all j in the interval [1, P], of the corresponding terms along the second secondary direction of the c-th collinear direction, and the smaller it is, the more it indicates that a certain brightness variation exists along the second secondary direction; the closer the resulting per-direction term is to 1, the more obvious the brightness variation distribution in the current collinear direction; max(·) takes the maximum over all c in the interval [1, C]. When the brightness variation distribution is analyzed on the surrounding area actually influenced by a stacking area, the collinear directions are influenced by stacking areas to different degrees, and the brightness variation distributions they present are not necessarily consistent, so the maximum value is screened out as the most representative one.
Thus, the brightness variation distribution of the target point is obtained.
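Only the variable list of this formula survives in the text, so the sketch below shows one plausible reading: for each of the four collinear direction pairs, gray values are sampled P pixels outward from the target point along both opposite arms, the variability of each arm is summarized by a standard deviation of adjacent differences, the two arms are combined through an exponential so that smoother, more consistent brightness change pushes the score toward 1, and the maximum over the four pairs is kept as the most representative, as the description indicates. The names and the exact combination are assumptions.

```python
import numpy as np

# the eight neighbourhood directions grouped into 4 collinear (opposite) pairs
COLLINEAR_PAIRS = [((0, 1), (0, -1)),
                   ((1, 0), (-1, 0)),
                   ((1, 1), (-1, -1)),
                   ((1, -1), (-1, 1))]


def brightness_change_distribution(gray: np.ndarray, y: int, x: int, n: int = 21) -> float:
    """One plausible reading of the brightness change distribution expression of (y, x)."""
    P = n // 2  # pixels sampled per secondary direction
    h, w = gray.shape
    scores = []
    for d1, d2 in COLLINEAR_PAIRS:
        arms = []
        for dy, dx in (d1, d2):
            vals = []
            for j in range(1, P + 1):
                yy, xx = y + j * dy, x + j * dx
                if 0 <= yy < h and 0 <= xx < w:
                    vals.append(float(gray[yy, xx]))
            # variability of the gray values along this arm (adjacent differences)
            arms.append(np.abs(np.diff(vals)).std() if len(vals) >= 2 else 0.0)
        # smooth, consistent change on both arms drives the score toward 1 (assumed combination)
        scores.append(np.exp(-(arms[0] + arms[1])))
    # the most representative collinear pair is kept, as in the description
    return float(max(scores))
```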
Step S004: and obtaining the density distribution expression degree of the target point according to the gap expression degree and the brightness change distribution expression of the target point.
Through the above steps, the gap expression degree and the brightness change distribution expression of each target point can be obtained. The smaller the gap expression degree, the larger the density distribution expression degree; the larger the brightness change distribution expression, the larger the density distribution expression degree. The calculation formula of the density distribution expression degree of the target point is therefore:
ρ = B / G
Wherein, ρ represents the density distribution expression degree of the target point; B represents the brightness change distribution expression of the target point; G represents the gap expression degree of the target point.
Thus, the density distribution expression degree of the target point is obtained.
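Since the density distribution expression degree is defined directly as the ratio of the brightness change distribution expression to the gap expression degree, the corresponding sketch is a one-liner; the small epsilon guard against a zero gap expression degree is an added assumption, not part of the patent.

```python
def density_distribution(brightness_change: float, gap_expression: float,
                         eps: float = 1e-6) -> float:
    """Density distribution expression degree = brightness change / gap expression."""
    return brightness_change / (gap_expression + eps)  # eps guard is an assumption
```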
Step S005: and obtaining the texture feature expression degree of the target point according to the differences among the gradient values of the pixel points of the target point in a plurality of directions.
The distribution of the food materials in the crusher not only forms stacking areas because of the density distribution; factors such as the size of the food materials also cause different structures and different information of the food materials to be displayed on the surface. When more texture information is obtained for the food materials displayed within a certain area, that is, when the changes of these different structures and information are more complex, the food material information is embodied more obviously, which is more conducive to the subsequent identification.
Taking the local area corresponding to the target point as an example, the texture feature expression degree of the local area of each target point is analyzed according to the edge features of the food material by analyzing the gradient changes of the pixel points in the local area: the faster the gradient changes, the more complex the corresponding food material information, and the more thoroughly the different structures and information of the food material are expressed.
For the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area, the mean of the gradient values of all its adjacent pixel points is calculated; the difference between the gradient value of that pixel point and this mean is recorded as the gradient difference of the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area.
And obtaining a gradient value of each pixel point in the food material gray level image by using a sobel operator, wherein the sobel operator is a known technology, and a specific method is not described here. The calculation formula of the texture feature expression degree of the target point is as follows:
Wherein, T represents the texture feature expression degree of the target point; N represents the number of pixel points in the local area of the target point; V represents the number of secondary directions of the target point; P represents the number of pixel points in each secondary direction from the target point within the local area; Δ(q) represents the difference between the gradient value of the target point and that of the q-th pixel point in its local area; h̄(v,j) represents the mean of the gradient values of the pixel points adjacent to the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area; e(v,j) represents the gradient difference of the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area; |·| is the absolute value function. The differences Δ(q) between the target point and the pixel points in its local area are accumulated and averaged: the larger this average, the larger the gradient change in the local area of the target point and the larger the corresponding final texture feature expression degree. The larger h̄(v,j), the more complex and rich the texture of the local area of the target point; the larger e(v,j), the more intense the edges and the more pronounced the textures that may be present in the image; the two are therefore combined to determine the gradient change in each secondary direction. min(·) takes, over all v in the interval [1, V], the minimum of the per-direction terms; that is, the texture feature expression degree determined from the local area is characterized by analyzing the gradient change in each secondary direction and selecting the secondary direction with the minimum value.
So far, the texture feature expression degree of the target point is obtained.
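The texture formula is likewise only described qualitatively in the surviving text, so the sketch below follows that description under stated assumptions: Sobel gradient magnitudes for every pixel, the mean absolute gradient difference between the target point and its local area, a per-direction accumulation of each pixel's deviation from the mean gradient of its neighbours, and the minimum over the eight secondary directions as the representative per-direction term. The final multiplication of the two parts is an assumed combination.

```python
import cv2
import numpy as np

EIGHT_DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]


def texture_expression(gray: np.ndarray, y: int, x: int, n: int = 21) -> float:
    """One plausible reading of the texture feature expression degree of (y, x)."""
    # gradient magnitude of every pixel via the Sobel operator, as named in the patent
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    grad = np.hypot(gx, gy)

    h, w = gray.shape
    r = n // 2
    local = grad[max(0, y - r):min(h, y + r + 1), max(0, x - r):min(w, x + r + 1)]

    # mean absolute gradient difference between the target point and its local area
    base = np.mean(np.abs(local - grad[y, x]))

    P = n // 2
    per_dir = []
    for dy, dx in EIGHT_DIRS:
        acc = 0.0
        for j in range(1, P + 1):
            yy, xx = y + j * dy, x + j * dx
            if not (0 <= yy < h and 0 <= xx < w):
                break
            # mean gradient of the neighbours of pixel (yy, xx), excluding the pixel itself
            nb = grad[max(0, yy - 1):yy + 2, max(0, xx - 1):xx + 2]
            nb_mean = (nb.sum() - grad[yy, xx]) / max(nb.size - 1, 1)
            acc += abs(grad[yy, xx] - nb_mean)  # the pixel's "gradient difference"
        per_dir.append(acc / P)
    # the least textured secondary direction is taken as representative, as in the description
    return float(base * min(per_dir))
```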
Step S006: and obtaining an enhanced image of the food material gray level image according to the density distribution expression degree and the texture feature expression degree of the target point.
The density distribution expression degree and the texture feature expression degree of the local area of each target point are determined through the above steps. The final purpose of this embodiment is to obtain areas with different degrees of expression, so that a higher weight can be given to areas with better feature expression; the two features are therefore integrated into one characteristic parameter, and the areas to be analyzed with different degrees of this parameter are finally obtained by clustering. The calculation formula of the comprehensive characteristic parameter of the target point is as follows:
Wherein, Z represents the comprehensive characteristic parameter of the target point; ρ represents the density distribution expression degree of the target point; T represents the texture feature expression degree of the target point; norm(·) is a linear normalization function that normalizes the data values to the interval [0, 1].
According to the mode, the comprehensive characteristic parameters of each pixel point in the food material gray level image are obtained.
In the food material gray level image, clustering is carried out according to the comprehensive characteristic parameter of each pixel point, and the food material gray level image is divided into a plurality of areas to be analyzed. What needs to be described is: this embodiment divides the food material gray level image into a plurality of clusters by means of a clustering algorithm, each cluster being one area to be analyzed; the number of clusters preset by the clustering algorithm in this embodiment is 6, which is described only as an example, other values may be set in other embodiments, and this embodiment is not limited thereto; a plurality of areas to be analyzed with different degrees are thereby obtained. The clustering algorithm is a well-known technique, and the specific method is not described here. Fig. 3 is a schematic diagram of the clustering result of the food material gray level image.
And respectively carrying out histogram equalization operation on each region to be analyzed divided by the food material gray level image by using a self-adaptive histogram equalization algorithm to obtain an enhanced image of the food material gray level image. The adaptive histogram equalization algorithm is a well known technique, and the specific method is not described here. Fig. 4 is a food material gray image and a gray histogram thereof, and fig. 5 is an enhanced image of the food material gray image and a gray histogram thereof.
Thus, an enhanced image of the food material gray level image is obtained.
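The clustering algorithm's name did not survive in the text and no specific adaptive histogram equalization implementation is named, so the sketch below assumes K-means (with the 6 clusters stated in the embodiment) over a per-pixel map of the comprehensive characteristic parameter, followed by an independent histogram equalization of each clustered region. Equalizing each region from its own histogram matches the stated goal of avoiding the over-enhancement that whole-image equalization causes.

```python
import cv2
import numpy as np


def enhance_by_regions(gray: np.ndarray, feature_map: np.ndarray,
                       n_clusters: int = 6) -> np.ndarray:
    """Cluster pixels on the comprehensive characteristic parameter, then equalize each region.

    feature_map: per-pixel comprehensive characteristic parameter, same shape as gray.
    """
    h, w = gray.shape
    samples = feature_map.reshape(-1, 1).astype(np.float32)

    # the clustering algorithm is assumed to be K-means; the source only says "clustering"
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, labels, _ = cv2.kmeans(samples, n_clusters, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(h, w)

    enhanced = gray.copy()
    for k in range(n_clusters):
        mask = labels == k
        vals = gray[mask]
        if vals.size == 0:
            continue
        # histogram equalization computed from this region's own histogram only,
        # so bright and dark regions are stretched independently
        hist = np.bincount(vals.ravel(), minlength=256).astype(np.float64)
        cdf = hist.cumsum()
        cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
        enhanced[mask] = cdf[vals].astype(np.uint8)
    return enhanced
```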
Step S007: and identifying the enhanced image of the food material gray level image through the trained CNN neural network to obtain the food material type.
The enhanced image of the food material gray level image is obtained through the above steps, and food material type identification is performed on it by a CNN neural network: the input of the neural network is the enhanced image of the food material gray level image, and the output is the food material type. What needs to be described is: the neural network model used in this embodiment is ResNet50; other models may be used in other embodiments, which is not limited here.
A training set is obtained by collecting a large number of enhanced images of food material gray level images, with the food material type used as the label of each sample; for example, apple is marked as 1 and banana is marked as 2. The obtained data set is used to train the neural network, the loss function used is the cross entropy loss function, the specific training process is well known, and the details are omitted in this embodiment.
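A minimal training sketch in PyTorch, under the following assumptions: torchvision's resnet50 stands in for the ResNet backbone named by the embodiment, the dataset directory, class count, epoch count and optimizer are placeholders, and the enhanced gray-level images are stored as an ImageFolder tree labelled by food material type; only the cross-entropy loss is taken directly from the embodiment.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 10  # placeholder: number of food material types
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# enhanced gray-level images are replicated to 3 channels to fit the ResNet stem
tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("enhanced_food_images/train", transform=tfm)  # path is a placeholder
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=None)                     # stands in for the embodiment's ResNet
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # output: food material type
model = model.to(device)

criterion = nn.CrossEntropyLoss()                         # loss named by the embodiment
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```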
Thus, the food material type is obtained, which completes the method of the present invention.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.
Claims (1)
1. A food material image recognition method for a food crusher, the method comprising the steps of:
acquiring food material gray level images in a food crusher, and marking any pixel point in the food material gray level images as a target point;
Obtaining the gap expression degree of the target point according to the gray value difference of the pixel points around the target point;
Obtaining brightness variation distribution expression of the target point according to the difference between the gray values of the pixel points of the target point in a plurality of directions;
obtaining the density distribution expression degree of the target point according to the gap expression degree and the brightness change distribution expression of the target point;
Obtaining the texture feature expression degree of the target point according to the differences among pixel point gradient values of the target point in multiple directions;
obtaining an enhanced image of the food material gray level image according to the density distribution expression degree and the texture feature expression degree of the target point;
Identifying the enhanced image of the food material gray level image through the trained CNN neural network to obtain the food material type;
The method for obtaining the gap expression degree of the target point according to the gray value difference of the pixel points around the target point comprises the following specific steps:
In the food material gray level image, a local area of size n×n is constructed with the target point as its center, wherein n is the preset side length of the local area; the first K pixel points with the smallest gray values in the local area are marked as reference points, wherein K is a preset quantity threshold;
In the local area, the rectangular area formed by the reference point is marked as a first layer rectangular extension area, the rectangular area formed by all pixel points adjacent to the reference point is marked as a second layer rectangular extension area, and so on, until the M layers of rectangular extension areas of the reference point are obtained, wherein M is the preset number-of-layers threshold of the rectangular extension area;
Obtaining the gap expression degree of the target point according to the Euclidean distance between the target point and the reference point and the gray value of the pixel point in each layer of rectangular expansion area of the reference point;
And obtaining the gap expression degree of the target point according to the Euclidean distance between the target point and the reference point and the gray value of the pixel point in each layer of rectangular expansion area of the reference point, wherein the corresponding specific formula is as follows:
Wherein, G represents the gap expression degree of the target point; K is the number of reference points; d(i) represents the Euclidean distance between the target point and the i-th reference point; g(i) represents the gray value of the i-th reference point; μ(i,m) represents the average gray value of all pixel points in the m-th layer rectangular extension area of the i-th reference point; σ(i,m) represents the standard deviation of the gray values of all pixel points in the m-th layer rectangular extension area of the i-th reference point; exp(·) is an exponential function with a natural constant as its base; |·| is an absolute value function;
The brightness change distribution of the target point is obtained according to the difference between the gray values of the pixel points of the target point in a plurality of directions, and the method comprises the following specific steps:
the direction of the eight neighborhood of the target point is recorded as the secondary direction of the target point; the opposite secondary directions of the target point are marked as a group of collinear directions; obtaining brightness variation distribution expression of the target point according to gray values of the pixel points in two secondary directions in the collinear direction;
And obtaining brightness change distribution expression of the target point according to gray values of the pixel points in two secondary directions in the collinear direction, wherein the corresponding specific formula is as follows:
Wherein, B represents the brightness change distribution expression of the target point; C represents the number of collinear directions; P represents the number of pixel points in each secondary direction from the target point within the local area; a(c,j) represents the gray value of the j-th pixel point, starting from the target point, along the first secondary direction of the c-th collinear direction within the local area; b(c,j) represents the gray value of the j-th pixel point, starting from the target point, along the second secondary direction of the c-th collinear direction within the local area; σ1(c) is the standard deviation, taken over all j in the interval [1, P], of the corresponding terms along the first secondary direction of the c-th collinear direction; σ2(c) is the standard deviation, taken over all j in the interval [1, P], of the corresponding terms along the second secondary direction of the c-th collinear direction; max(·) takes the maximum over all c in the interval [1, C];
the method for obtaining the density distribution expression level of the target point according to the gap expression level and the brightness change distribution expression of the target point comprises the following specific steps:
The ratio of the brightness change distribution expression of the target point to the gap expression degree of the target point is recorded as the density distribution expression degree of the target point;
According to the difference between pixel point gradient values of the target point in a plurality of directions, the texture feature expression degree of the target point is obtained, and the method comprises the following specific steps:
Using a sobel operator to obtain a gradient value of each pixel point in the food material gray level image;
For the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area, the mean of the gradient values of all its adjacent pixel points is calculated; the difference between the gradient value of that pixel point and this mean is recorded as the gradient difference of the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area;
obtaining the texture feature expression degree of the target point according to the difference value of the gradient values of the pixel points in the target point and the local area, the gradient change in each secondary direction and the gradient difference of each pixel point in the local area along each secondary direction from the target point;
According to the difference value of gradient values of the pixel points in the target point and the local area thereof, gradient change in each secondary direction and gradient difference of each pixel point in the local area along each secondary direction from the target point, the texture feature expression degree of the target point is obtained, and the corresponding specific formula is as follows:
Wherein, T represents the texture feature expression degree of the target point; N represents the number of pixel points in the local area of the target point; V represents the number of secondary directions of the target point; P represents the number of pixel points in each secondary direction from the target point within the local area; Δ(q) represents the difference between the gradient value of the target point and that of the q-th pixel point in its local area; h̄(v,j) represents the mean of the gradient values of the pixel points adjacent to the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area; e(v,j) represents the gradient difference of the j-th pixel point, starting from the target point, along the v-th secondary direction within the local area; min(·) takes the minimum over all v in the interval [1, V];
The method for obtaining the enhanced image of the food material gray level image according to the density distribution expression degree and the texture feature expression degree of the target point comprises the following specific steps:
obtaining comprehensive characteristic parameters of the target point according to the density distribution expression degree and the texture characteristic expression degree of the target point;
according to the comprehensive characteristic parameter of each pixel point, clustering the food material gray level image by a clustering algorithm and dividing the food material gray level image into a plurality of areas to be analyzed;
Respectively carrying out histogram equalization operation on each region to be analyzed divided by the food material gray level image by using a self-adaptive histogram equalization algorithm to obtain an enhanced image of the food material gray level image;
and obtaining comprehensive characteristic parameters of the target point according to the density distribution expression degree and the texture characteristic expression degree of the target point, wherein the corresponding specific calculation formula is as follows:
Wherein, Z represents the comprehensive characteristic parameter of the target point; ρ represents the density distribution expression degree of the target point; T represents the texture feature expression degree of the target point; norm(·) is a linear normalization function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410362374.1A CN117975444B (en) | 2024-03-28 | 2024-03-28 | Food material image recognition method for food crusher |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410362374.1A CN117975444B (en) | 2024-03-28 | 2024-03-28 | Food material image recognition method for food crusher |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117975444A CN117975444A (en) | 2024-05-03 |
CN117975444B true CN117975444B (en) | 2024-06-14 |
Family
ID=90849885
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410362374.1A Active CN117975444B (en) | 2024-03-28 | 2024-03-28 | Food material image recognition method for food crusher |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117975444B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115994907A (en) * | 2023-03-22 | 2023-04-21 | 济南市莱芜区综合检验检测中心 | Intelligent processing system and method for comprehensive information of food detection mechanism |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018040105A1 (en) * | 2016-09-05 | 2018-03-08 | 合肥华凌股份有限公司 | System and method for food recognition, food model training method, refrigerator and server |
CN116168025B (en) * | 2023-04-24 | 2023-07-07 | 日照金果粮油有限公司 | Oil curtain type fried peanut production system |
CN116645363B (en) * | 2023-07-17 | 2023-10-13 | 山东富鹏生物科技有限公司 | Vision-based starch production quality real-time detection method |
CN116758074B (en) * | 2023-08-18 | 2024-04-05 | 长春市天之城科技有限公司 | Multispectral food image intelligent enhancement method |
CN117011303B (en) * | 2023-10-08 | 2024-01-09 | 泰安金冠宏油脂工业有限公司 | Oil production quality detection method based on machine vision |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115994907A (en) * | 2023-03-22 | 2023-04-21 | 济南市莱芜区综合检验检测中心 | Intelligent processing system and method for comprehensive information of food detection mechanism |
Non-Patent Citations (1)
Title |
---|
Research on a Rice Quality Detection System Based on Image Processing (基于图像处理的大米品质检测系统研究); Cui Wenwen (崔雯雯); China Master's Theses Full-text Database, Information Science and Technology; 2015-08-15 (No. 08); 1-103 *
Also Published As
Publication number | Publication date |
---|---|
CN117975444A (en) | 2024-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112036335B (en) | Inverse convolution guided semi-supervised plant leaf disease identification and segmentation method | |
CN105930815B (en) | Underwater organism detection method and system | |
CN110443778B (en) | Method for detecting irregular defects of industrial products | |
CN111986126B (en) | Multi-target detection method based on improved VGG16 network | |
CN111986125A (en) | Method for multi-target task instance segmentation | |
CN110503140B (en) | Deep migration learning and neighborhood noise reduction based classification method | |
Niu et al. | Image segmentation algorithm for disease detection of wheat leaves | |
CN105787948A (en) | Quick graph cutting method based on multiple deformation resolutions | |
CN108710916A (en) | The method and device of picture classification | |
CN113052859A (en) | Super-pixel segmentation method based on self-adaptive seed point density clustering | |
CN114913138A (en) | Method and system for detecting defects of pad printing machine product based on artificial intelligence | |
CN111783885A (en) | Millimeter wave image quality classification model construction method based on local enhancement | |
CN116934787A (en) | Image processing method based on edge detection | |
CN117522864B (en) | European pine plate surface flaw detection method based on machine vision | |
CN117314940B (en) | Laser cutting part contour rapid segmentation method based on artificial intelligence | |
CN113223098B (en) | Preprocessing optimization method for image color classification | |
CN112446417B (en) | Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation | |
CN114612450A (en) | Image detection segmentation method and system based on data augmentation machine vision and electronic equipment | |
CN112800968B (en) | HOG blocking-based feature histogram fusion method for identifying identity of pigs in drinking area | |
CN116934761B (en) | Self-adaptive detection method for defects of latex gloves | |
CN117975444B (en) | Food material image recognition method for food crusher | |
CN115841600B (en) | Deep learning-based sweet potato appearance quality classification method | |
CN112489049A (en) | Mature tomato fruit segmentation method and system based on superpixels and SVM | |
CN115100509B (en) | Image identification method and system based on multi-branch block-level attention enhancement network | |
CN115018729B (en) | Content-oriented white box image enhancement method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||