CN116758074B - Multispectral food image intelligent enhancement method - Google Patents

Multispectral food image intelligent enhancement method

Info

Publication number
CN116758074B
CN116758074B (application CN202311040354.4A)
Authority
CN
China
Prior art keywords
value
sliding window
dispersion
window area
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311040354.4A
Other languages
Chinese (zh)
Other versions
CN116758074A (en
Inventor
课净璇
段培培
王亚斌
谢安国
马艳莉
毕奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Tianzhicheng Technology Co ltd
Nanyang Institute of Technology
Original Assignee
Changchun Tianzhicheng Technology Co ltd
Nanyang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Tianzhicheng Technology Co ltd and Nanyang Institute of Technology
Priority to CN202311040354.4A
Publication of CN116758074A
Application granted
Publication of CN116758074B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/10 - Image acquisition
    • G06V 10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/143 - Sensing or illuminating at different wavelengths
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/763 - Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/68 - Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image enhancement, and in particular to an intelligent enhancement method for multispectral food images. The method comprises: acquiring a multispectral image of the food to be detected; sliding a window over the multispectral image, obtaining the gray level difference dispersion of each sliding window area from the gray values of its pixel points, and screening out the fuzzy sliding window areas; obtaining the optimal k value of a k-means clustering algorithm from the gray value distribution of the pixel points in the fuzzy sliding window areas; and clustering the multispectral image with the optimal k value to obtain the regions of the various substances in the food to be detected. Because the multispectral image is clustered with the optimal k value, the regions of different colors are classified accurately, the regions of the various substances in the food to be detected are obtained accurately, and the food can be detected accurately and conveniently.

Description

Multispectral food image intelligent enhancement method
Technical Field
The invention relates to the technical field of image enhancement, in particular to an intelligent enhancement method for multispectral food images.
Background
Food safety has long been a matter of close social concern, and while the food industry develops rapidly, many food safety problems remain to be solved. For example, additives that are used improperly, impurities introduced during processing and deteriorated raw materials all make it very important to detect food.
Traditional food detection relies mainly on chemical analysis and instrumental analysis, most of which require laboratory support and suffer from long detection periods, limited numbers of samples, heavy workloads and high costs, so the substances in food cannot be detected accurately and conveniently.
Existing methods use multispectral imaging, which can rapidly assess food quality without damaging the food. Multispectral imaging can acquire several kinds of spectral information about the food, detect the content of substances such as moisture, and check the food's integrity. However, multispectral imaging is strongly affected by environmental factors, which may leave the various substances in the food imaged unclearly or blurred in the multispectral image; the substances then cannot be divided accurately, and the food detection becomes inaccurate.
Disclosure of Invention
In order to solve the technical problem that the various substances in food are imaged unclearly in a multispectral image, so that they cannot be divided accurately and the food cannot be detected accurately, the invention aims to provide an intelligent enhancement method for multispectral food images, which adopts the following technical scheme:
the invention provides an intelligent enhancement method for multispectral food images, which comprises the following steps:
acquiring multispectral images of food to be detected;
sliding windows with preset sizes are set to slide in the multispectral image according to preset step sizes, and gray level difference dispersion of each sliding window area is obtained according to gray level values of pixel points in each sliding window area; screening out a fuzzy sliding window area according to the gray level difference dispersion;
acquiring an optimal k value in a k-means clustering algorithm according to gray value distribution of pixel points in the fuzzy sliding window area; clustering the multispectral images according to the optimal k value to obtain areas of various substances in the food to be detected;
the method for obtaining the optimal k value comprises the following steps: setting the initial k value, selecting k target pixel points from each fuzzy sliding window area, acquiring target dispersion of each fuzzy sliding window area according to the gray value of the target pixel points in each fuzzy sliding window area, and acquiring a target value according to the size and distribution of the target dispersion; updating the initial k value according to a preset updating step length, acquiring an updated target value according to the updated k value, and taking the k value corresponding to the update as an optimal k value when the updated target value meets a preset condition.
Further, the method for acquiring the gray scale difference dispersion comprises the following steps:
optionally selecting a sliding window area as a reference sliding window area, and acquiring the absolute value of the difference value between the gray values of every two pixel points in the reference sliding window area as a gray level difference;
and acquiring the standard deviation of the gray scale difference as the gray scale difference dispersion of the reference sliding window area.
Further, the method for screening out the fuzzy sliding window area comprises the following steps:
sequencing the gray level difference dispersion of each sliding window area from small to large to obtain a dispersion sequence;
taking every two adjacent gray level difference dispersions in the dispersion sequence as a matching pair;
obtaining the difference degree of the matching pair according to the difference of the gray level difference dispersion in the matching pair;
obtaining the average value of the difference degrees of all the matching pairs as a standard value;
and screening out the fuzzy sliding window area according to the standard value.
Further, the method for obtaining the difference degree comprises the following steps:
and taking the absolute value of the difference value between the two gray level difference dispersions in the matching pair as the difference degree of the matching pair.
Further, the method for screening out the fuzzy sliding window area according to the standard value comprises the following steps: arranging the positions of the matching pairs in the dispersion sequence according to the corresponding gray level difference dispersion to obtain a matching pair sequence;
according to the sequence of the matching pairs, the matching pair which first appears and is corresponding to the difference degree that the error of the standard value is smaller than the preset error is used as a critical matching pair;
taking the gray level difference dispersion larger than or equal to the second gray level difference dispersion in the critical matching pair in the dispersion sequence as an abnormal gray level difference dispersion; the arrangement order of the gray level difference dispersion in the matching pair is consistent with the arrangement order in the dispersion sequence;
and taking the sliding window area corresponding to the abnormal gray level difference dispersion as a fuzzy sliding window area.
Further, the method for selecting k target pixel points from each fuzzy sliding window area and obtaining the target dispersion of each fuzzy sliding window area according to the gray value of the target pixel point in each fuzzy sliding window area comprises the following steps:
optionally selecting a fuzzy sliding window area as a target fuzzy sliding window area, selecting k pixel points in the target fuzzy sliding window area as reference pixel points, and acquiring the gray level difference of every two arbitrary reference pixel points as a reference gray level difference;
acquiring the variance of the reference gray level difference as a reference variance;
changing reference pixel points to obtain all reference variances in the target fuzzy sliding window area;
taking k pixel points corresponding to the maximum reference variance as the target pixel points;
the maximum reference variance is taken as the target dispersion of the target fuzzy sliding window area.
Further, the method for obtaining the target value includes:
acquiring the average value of all the target dispersions as an average dispersion;
normalizing the average dispersion to obtain an overall difference degree;
dividing equal target dispersion into the same type of dispersion, and obtaining the number of fuzzy sliding window areas corresponding to each type of dispersion as the type distribution number;
acquiring the total number of the fuzzy sliding window areas as fuzzy quantity;
taking the ratio of the type distribution quantity of each type of dispersion to the fuzzy quantity as the type probability of the corresponding type of dispersion;
acquiring entropy of the type probability as a target entropy;
the result of carrying out negative correlation and normalization on the target entropy is used as a target discrimination value;
and obtaining a target value according to the overall difference degree and the target discrimination value.
Further, the method for obtaining the target value according to the overall difference degree and the target discrimination value comprises the following steps:
and taking the addition result of the integral difference degree and the target discrimination value as a target value.
Further, the method for obtaining the optimal k value includes:
increasing the initial k value by a preset updating step length to obtain an updated k value until a preset cut-off condition is met, and ending updating;
acquiring an updated target value according to the updated k value;
and taking the k value corresponding to the maximum target value as the optimal k value.
Further, the method for clustering the multispectral images according to the optimal k value to obtain the areas of various substances in the food to be detected comprises the following steps:
acquiring categories in the multispectral image through a k-means clustering algorithm according to the optimal k value;
the area corresponding to each category is the area of various substances in the food to be detected.
The invention has the following beneficial effects:
according to the gray value of the pixel point in each sliding window area, the gray difference dispersion degree of each sliding window area is obtained, and the color distribution complexity in each sliding window area is directly reflected, so that the fuzzy sliding window area can be accurately screened out according to the gray difference dispersion degree; setting the initial k value, selecting k target pixel points from each fuzzy sliding window area, and judging whether the k value is reasonable or not according to the number of the target pixel points; according to the gray value of the target pixel point in each fuzzy sliding window area, the target dispersion of each fuzzy sliding window area is obtained, whether the target pixel point is a pixel point of a different substance or not is determined, and whether the k value is reasonable or not is indirectly reflected; obtaining a target value according to the size and distribution of the target dispersion, and determining an optimal k value in a k-means clustering algorithm according to the target value; updating the initial k value according to a preset updating step length, updating a corresponding target value, and when the updated target value meets a preset condition, correspondingly updating the k value to be the most fit with the type number of substances in the food to be detected, so that the correspondingly updated k value is used as the optimal k value in a k-means clustering algorithm; the multispectral images are clustered according to the optimal k value, so that different color areas in the multispectral images are accurately classified, various substances in the food to be detected are more clearly divided, the areas of various substances in the food to be detected are accurately obtained, and accurate detection of the food to be detected is facilitated.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a multispectral food image intelligent enhancement method according to an embodiment of the invention.
Detailed Description
In order to further explain the technical means adopted by the invention to achieve the intended aim and their effects, a specific implementation, structure, features and effects of the intelligent enhancement method for multispectral food images according to the invention are described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the intelligent enhancement method for the multispectral food image provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of a multispectral food image intelligent enhancement method according to an embodiment of the invention is shown, the method comprises the following steps:
step S1: and acquiring multispectral images of the food to be detected.
Specifically, food is a necessity of daily life, and ensuring its quality and safety is important, yet microorganisms and many other substances in food are not visible to the human eye. In existing methods, a surface image of the food is acquired with a camera and segmented to obtain the different regions of the food and determine the substances it contains; however, substances inside the food cannot be determined in this way, so the food cannot be detected accurately. In order to obtain every substance region in the food clearly, the embodiment of the invention uses multispectral imaging, which can detect the various substances in the food without damaging it: a multispectral image containing the food to be detected is acquired with the multispectral imaging technology. Because the acquired multispectral image contains not only the food to be detected but possibly also noise and non-food regions, the image is preprocessed to remove the noise and to obtain a multispectral image containing only the food to be detected.
In the embodiment of the invention, a Gaussian filter is used to denoise the multispectral image; in another embodiment, other methods such as median filtering or Total Variation (TV) denoising can be used instead. Gaussian filtering, median filtering and the Total Variation (TV) method are all prior art and are not described in detail here.
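A minimal sketch of the denoising step, assuming the multispectral image is stored as a (bands, height, width) array and using scipy's Gaussian filter; the function name, the sigma value and the per-band filtering are illustrative assumptions rather than part of the embodiment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_multispectral(cube: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Apply Gaussian smoothing to each band of a (bands, height, width) cube."""
    return np.stack([gaussian_filter(band, sigma=sigma) for band in cube])
```

Median filtering (e.g. scipy.ndimage.median_filter) or a total-variation denoiser could be substituted band by band in the same way.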
The embodiment of the invention uses a semantic segmentation network to obtain a multispectral image containing only the food to be detected. The semantic segmentation network is a U-net neural network; its input is the denoised multispectral image containing the food to be detected, and its output is a multispectral image containing only the food to be detected. The U-net neural network is trained with the following labels: the region of the food to be detected is marked 1 and all other regions are marked 0; the loss function is the cross-entropy loss. The U-net neural network is a known technique and is not described here.
It should be noted that, the multispectral images appearing later in the embodiment of the invention are multispectral images only containing food to be detected.
Each band of the multispectral image is itself a gray-level image, so the gray value of each pixel point in the multispectral image can be obtained directly without converting the image to gray scale. The colors of different substance regions in the multispectral image differ to some extent, and different colors carry different meanings. For the distribution of the different colors in the multispectral image, the embodiment of the invention clusters regions of similar or identical color, so that the various substances in the food to be detected are obtained accurately; this achieves the aim of the embodiment of the invention and the effect of enhancing the multispectral image.
Step S2: sliding windows with preset sizes are set to slide in the multispectral image according to preset step sizes, and gray level difference dispersion of each sliding window area is obtained according to gray level values of pixel points in each sliding window area; and screening out the fuzzy sliding window area according to the gray level difference dispersion.
Specifically, the embodiment of the invention takes cranberries as the food to be detected. Cranberries contain dietary fiber, an indispensable component of a healthy diet that is closely related to human health and helps prevent and improve certain diseases. To make the best use of dietary fiber, its content in cranberries needs to be detected accurately. A multispectral image of the cranberry is therefore acquired and enhanced, so that the dietary fiber region appears more clearly in the image and the dietary fiber content can be measured more accurately. Enhancing the multispectral image of the cranberry mainly means dividing the various substance regions in the image accurately, so that the dietary fiber region can be determined accurately. To this end, the embodiment of the invention sets a sliding window of a preset size and slides it over the multispectral image of the cranberry with a preset step size of 1, so that every pixel point in the multispectral image has a corresponding sliding window area; the implementer can set the size of the sliding window and the preset step size according to the actual situation, without limitation here. Each sliding window area in the multispectral image of the cranberry is analyzed, and the dietary fiber region is determined from the distribution of the gray values of the pixel points in each sliding window area. In some sliding window areas the gray-value distribution is complex and the dietary fiber region cannot be clearly distinguished; these areas need further analysis so that the dietary fiber region in the cranberry can be acquired accurately. The fuzzy (blurred) sliding window areas are acquired as follows:
(1) And acquiring gray level difference dispersion.
The difference between the gray values of every two pixel points in each sliding window area is obtained; the larger the difference is, the more the two pixel points differ from each other, so the kinds of substances in the sliding window area can be reflected by how much these gray-value differences fluctuate. The embodiment of the invention therefore obtains the gray level difference dispersion of each sliding window area from the differences between the gray values of every two pixel points in the area, which reflects how complex the area is.
Preferably, the method for obtaining the gray level difference dispersion is as follows: optionally selecting a sliding window area as a reference sliding window area, and acquiring the absolute value of the difference value between the gray values of every two pixel points in the reference sliding window area as a gray level difference; and obtaining the standard deviation of the gray scale difference as the gray scale difference dispersion of the reference sliding window area.
As an example, the i-th sliding window area in the multispectral image is selected as the reference sliding window area; the absolute value of the difference between the gray values of every two pixel points in the i-th sliding window area is obtained as a gray level difference, all gray level differences in the i-th sliding window area are collected, and their standard deviation is obtained as the gray level difference dispersion of the i-th sliding window area. Thus, the gray level difference dispersion $S_i$ of the i-th sliding window area is:

$$S_i=\sqrt{\frac{1}{N}\sum_{m=1}^{N}\left(d_{i,m}-\bar{d}_i\right)^{2}}$$

where $S_i$ is the gray level difference dispersion of the i-th sliding window area; $N$ is the total number of gray level differences in the i-th sliding window area; $d_{i,m}$ is the m-th gray level difference in the i-th sliding window area; and $\bar{d}_i$ is the average of all gray level differences in the i-th sliding window area.
It should be noted that the larger $S_i$ is, the more pixel points with large gray level gradients the i-th sliding window area is likely to contain, that is, the more pixel points with abrupt color changes it has, and therefore the more substance types the i-th sliding window area contains; the smaller $S_i$ is, the less the gray level differences in the i-th sliding window area fluctuate, the closer the gray values of its pixel points are, the more similar its colors are, and the more likely the substances in the i-th sliding window area are one and the same substance.
And according to the method for acquiring the gray level difference dispersion of the ith sliding window area, acquiring the gray level difference dispersion of each sliding window area.
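The gray level difference dispersion described above can be sketched as follows; the 5 x 5 window size is an assumption (the embodiment leaves the size to the implementer), the step size of 1 follows the embodiment, and the helper names are illustrative:

```python
import numpy as np

def gray_difference_dispersion(window: np.ndarray) -> float:
    """Standard deviation of |g_p - g_q| over all pixel pairs of one window area."""
    gray = window.astype(np.float64).ravel()
    diffs = np.abs(gray[:, None] - gray[None, :])        # all pairwise gray differences
    pair_diffs = diffs[np.triu_indices(len(gray), k=1)]  # each unordered pair counted once
    return float(pair_diffs.std())                       # population standard deviation

def window_dispersions(image: np.ndarray, win: int = 5, step: int = 1) -> np.ndarray:
    """Slide a win x win window with the given step and collect each area's dispersion."""
    h, w = image.shape
    values = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            values.append(gray_difference_dispersion(image[y:y + win, x:x + win]))
    return np.asarray(values)
```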
(2) And acquiring a fuzzy sliding window area.
Each sliding window area has a corresponding gray level difference dispersion. The smaller the dispersion is, the more uniform the color distribution in the corresponding sliding window area is and the fewer substance types it contains; conversely, the larger the dispersion is, the more complex the color distribution is and the more substance types it contains. Picking out the sliding window areas with large gray level difference dispersion therefore effectively improves how well the subsequent clustering handles the fuzzy boundaries between different substances.
Preferably, the method for acquiring the fuzzy sliding window area comprises the following steps: sequencing the gray level difference dispersion of each sliding window area from small to large to obtain a dispersion sequence; taking every two adjacent gray level difference divergences in the divergences sequence as a matching pair; taking the absolute value of the difference between the two gray level difference dispersions in the matching pair as the difference degree of the matching pair; obtaining the average value of the difference degrees of all matched pairs as a standard value; and arranging the matching pairs according to the positions of the corresponding gray level difference dispersion in the dispersion sequence to obtain a matching pair sequence. According to the sequence of the matching pairs, the matching pair which first appears and corresponds to the difference degree that the error of the standard value is smaller than the preset error is used as a critical matching pair; taking the gray level difference dispersion larger than or equal to the second gray level difference dispersion in the critical matching pair in the dispersion sequence as the abnormal gray level difference dispersion; the arrangement sequence of gray level difference dispersion in the matching pair is consistent with the arrangement sequence in the dispersion sequence; and taking a sliding window area corresponding to the abnormal gray level difference dispersion as a fuzzy sliding window area.
As an example, the gray level difference dispersions of all sliding window areas in the multispectral image are sorted from small to large to obtain the dispersion sequence. Starting from the first gray level difference dispersion in the dispersion sequence, every two adjacent dispersions are taken as a matching pair, and the matching pairs are obtained in order: the first and second dispersions form one matching pair, the second and third dispersions form the next, and so on until the last dispersion in the sequence. The order of the two dispersions within a matching pair is the same as their order in the dispersion sequence. The absolute value of the difference between the two dispersions in a matching pair, i.e. the second dispersion minus the first, is taken as the degree of difference of the matching pair; since the second dispersion in a matching pair is necessarily greater than or equal to the first, the degree of difference is necessarily non-negative. The average of the degrees of difference of all matching pairs is taken as the standard value. The matching pairs are then arranged according to the positions of their dispersions in the dispersion sequence to obtain the matching pair sequence: the pair formed by the first and second dispersions is the first matching pair, the pair formed by the second and third dispersions is the second, and the pair formed by the second-to-last and last dispersions is the last matching pair. Starting from the first matching pair of the matching pair sequence, the degree of difference of each pair is compared with the standard value, and the pair whose degree of difference equals the standard value is taken as the critical matching pair. In practice none of the degrees of difference may be exactly equal to the standard value; to avoid failing to find a critical matching pair, the embodiment of the invention sets a small preset error, here 0.1, and the implementer can set it according to the actual situation, without limitation here.
The second gray level difference dispersion in the critical matching pair is taken as the threshold dividing clear sliding window areas from fuzzy sliding window areas in the multispectral image: a sliding window area whose dispersion in the dispersion sequence is greater than or equal to this second dispersion is a fuzzy sliding window area, in which the boundaries between different color regions are relatively blurred, i.e. the regions of different substances are not clear; a sliding window area whose dispersion is smaller than this second dispersion is a clear sliding window area, in which the color distribution is distinct and uniform, i.e. it essentially contains a single substance type. To make the subsequent division of the fuzzy sliding window areas more accurate, the embodiment of the invention screens out or masks all clear sliding window areas, so that they do not interfere with the subsequent analysis of the fuzzy sliding window areas. The fuzzy sliding window areas in the multispectral image are thus obtained.
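A sketch of the screening step under the same assumptions, operating on the dispersions returned above; the preset error of 0.1 follows the embodiment, and the fallback when no critical matching pair is found is an added assumption:

```python
import numpy as np

def screen_fuzzy_windows(dispersions: np.ndarray, preset_error: float = 0.1) -> np.ndarray:
    """Return indices of sliding window areas judged fuzzy (blurred)."""
    order = np.argsort(dispersions)            # dispersion sequence, small to large
    seq = dispersions[order]
    gaps = np.diff(seq)                        # degree of difference of each matching pair
    standard = gaps.mean()                     # standard value
    near = np.where(np.abs(gaps - standard) < preset_error)[0]
    if len(near) == 0:                         # no critical matching pair found
        return np.array([], dtype=int)
    threshold = seq[near[0] + 1]               # second dispersion of the critical matching pair
    return order[seq >= threshold]             # fuzzy sliding window areas
```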
Step S3: according to the gray value distribution of the pixel points in the fuzzy sliding window area, the optimal k value in the k-means clustering algorithm is obtained, and the multispectral images are clustered according to the optimal k value, so that the areas of various substances in the food to be detected are obtained.
Specifically, the fuzzy sliding window areas in the multispectral image cannot clearly separate the regions of different substances, so the division of the substance regions of the food to be detected is inaccurate. In order to obtain each substance region accurately and thus detect the food accurately, the k-means clustering algorithm is improved so that the k value equals the number of substance types in the food to be detected, i.e. the cranberry; after clustering, the regions of different colors in the multispectral image are divided more accurately, and because different colors represent different substance types, the different substances in the food are also divided more accurately. The fuzzy sliding window areas contain several color distributions, i.e. several substance types of the food to be detected, so the optimal k value of the k-means clustering algorithm is obtained from the fuzzy sliding window areas. The k-means clustering algorithm is an iterative clustering analysis algorithm: k objects are selected at random as initial cluster centers, the distance from each object to each cluster center is calculated, and each object is assigned to the category of the nearest cluster center; each cluster center is then updated according to the objects assigned to it, and these steps are repeated until the cluster centers no longer change or the maximum number of iterations is reached. The k-means clustering algorithm is a known technique and is not described in detail here.
The method for obtaining the optimal k value comprises the following steps: setting the initial k value, selecting k target pixel points from each fuzzy sliding window area, acquiring target dispersion of each fuzzy sliding window area according to the gray value of the target pixel points in each fuzzy sliding window area, and acquiring a target value according to the size and distribution of the target dispersion; updating the initial k value according to a preset updating step length, acquiring an updated target value according to the updated k value, and taking the k value corresponding to the update as an optimal k value when the updated target value meets a preset condition. The specific method for obtaining the optimal k value is as follows:
As an example, the initial k value is set to 3 in the embodiment of the invention; the implementer can set it according to the actual situation, without limitation here. The k value is updated according to the conditions set below. To judge whether a k value is reasonable, the same number of pixel points as the k value is selected from each fuzzy sliding window area; the k pixel points selected from each fuzzy sliding window area are analyzed to obtain the target dispersion of each fuzzy sliding window area, a target value is obtained from the target dispersions, and the optimal k value is determined from the target value.
(1) And obtaining the target dispersion.
In essence, the optimal k value is the total number of substance types in the food to be detected, which allows the different substances in the food to be clustered accurately. The maximum difference between the pixel points selected from each fuzzy sliding window area indirectly reflects whether the selected pixel points belong to different substances, and hence whether the number of selected pixel points is reasonable.
Preferably, the method for acquiring the target dispersion is as follows: optionally selecting a fuzzy sliding window area as a target fuzzy sliding window area, selecting k pixel points from the target fuzzy sliding window area as reference pixel points, and acquiring gray level difference between every two arbitrary reference pixel points as reference gray level difference; acquiring a variance of the reference gray level difference as a reference variance; changing reference pixel points to obtain all reference variances in the target fuzzy sliding window area; taking k pixel points corresponding to the maximum reference variance as target pixel points; the maximum reference variance is taken as the target dispersion of the target fuzzy sliding window area.
As an example, the r-th fuzzy sliding window area is selected as the target fuzzy sliding window area, and k pixel points are selected from it as reference pixel points, the number of reference pixel points being equal to the k value. Taking the initial k value as an example, the target dispersion of the r-th fuzzy sliding window area is obtained as follows. Since the initial k value is set to 3, 3 reference pixel points are selected at random from the r-th fuzzy sliding window area. The gray level difference of every two reference pixel points is obtained as a reference gray level difference; with 3 reference pixel points grouped pairwise, there are 3 reference gray level differences. The variance of the reference gray level differences, the reference variance, reflects how much the gray values of the reference pixel points differ from one another: the larger the reference variance is, the larger the differences between the gray values of the reference pixel points are, the more likely the selected reference pixel points belong to different substance regions, and the more likely the k value is the optimal k value. Therefore, every group of 3 different pixel points in the r-th fuzzy sliding window area is taken in turn as the reference pixel points, all reference variances in the r-th fuzzy sliding window area are obtained, the 3 pixel points corresponding to the maximum reference variance are taken as the target pixel points, and the maximum reference variance is taken as the target dispersion of the r-th fuzzy sliding window area; that is, the reference variance of the target pixel points is the target dispersion.
And according to the method for acquiring the target dispersion of the r-th fuzzy sliding window area, acquiring the target dispersion of each fuzzy sliding window area.
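The target dispersion of one fuzzy sliding window area can be sketched as an exhaustive search over all groups of k pixels; this brute-force form is only meant to make the definition concrete and is feasible only for small windows and small k:

```python
import numpy as np
from itertools import combinations

def target_dispersion(window: np.ndarray, k: int) -> tuple[float, tuple[int, ...]]:
    """Return (target dispersion, indices of the k target pixel points) of one fuzzy window."""
    gray = window.astype(np.float64).ravel()
    best_var, best_group = -1.0, ()
    for group in combinations(range(len(gray)), k):        # candidate reference pixel points
        values = gray[list(group)]
        ref_diffs = [abs(a - b) for a, b in combinations(values, 2)]  # reference gray differences
        var = float(np.var(ref_diffs))                      # reference variance
        if var > best_var:                                  # keep the maximum reference variance
            best_var, best_group = var, group
    return best_var, best_group
```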
(2) A target value is obtained.
And acquiring a target value according to the size and the distribution of the target dispersion of each fuzzy sliding window area so as to determine an optimal k value according to the target value later, so that the area division in the multispectral image is more accurate.
Preferably, the method for obtaining the target value is as follows: acquiring the average value of all target dispersions as an average dispersion; normalizing the average dispersion as a whole difference degree; dividing equal target dispersion into the same type of dispersion, and obtaining the number of fuzzy sliding window areas corresponding to each type of dispersion as the type distribution number; acquiring the total number of the fuzzy sliding window areas as fuzzy quantity; the ratio of the type distribution quantity and the fuzzy quantity of each type of dispersion is used as the type probability of the corresponding type of dispersion; acquiring entropy of the type probability as a target entropy; the result of carrying out negative correlation and normalization on the target entropy is used as a target discrimination value; the result of the addition of the overall difference degree and the target discrimination value is set as a target value.
As an example, the average of all target dispersions in the multispectral image is obtained as the average dispersion, and the result of normalizing the average dispersion is taken as the overall difference degree; the larger the overall difference degree is, the more likely the target pixel points are pixel points of different substances in the food to be detected. Equal target dispersions are grouped into the same type of dispersion, and the number of fuzzy sliding window areas corresponding to each type of dispersion is obtained as its type distribution number; the total number of fuzzy sliding window areas is obtained as the fuzzy number; the ratio of the type distribution number of each type of dispersion to the fuzzy number is taken as the type probability of that type of dispersion; the entropy of the type probabilities is taken as the target entropy; the result of negatively correlating and normalizing the target entropy is taken as the target discrimination value; and the sum of the overall difference degree and the target discrimination value is taken as the target value. Therefore, the target value C is computed as:

$$C=\tanh\left(\frac{1}{n}\sum_{a=1}^{n}T_{a}\right)+\exp\left(\sum_{v=1}^{M}\frac{n_{v}}{n}\log_{2}\frac{n_{v}}{n}\right)$$

where C is the target value; n is the total number of fuzzy sliding window areas; $T_{a}$ is the target dispersion of the a-th fuzzy sliding window area; M is the total number of types of target dispersion; $n_{v}$ is the type distribution number of the v-th type of dispersion; tanh is the hyperbolic tangent function; exp is the exponential function with the natural constant e as its base; and log is the logarithm with base 2. The exponent of the second term is the negative of the target entropy, so the exp term is the target discrimination value.
The larger the average dispersion $\frac{1}{n}\sum_{a=1}^{n}T_{a}$ is, that is, the larger the target dispersions of the fuzzy sliding window areas are, the more likely it is that the number of substance types in the multispectral image equals k (here 3), i.e. the more likely 3 is the optimal k value, and the larger the overall difference degree and hence C are. Because each target dispersion is non-negative, the average dispersion is non-negative, so the overall difference degree $\tanh\left(\frac{1}{n}\sum_{a}T_{a}\right)$ lies between 0 and 1. The embodiment of the invention normalizes the average dispersion with the hyperbolic tangent function tanh; in another embodiment, the average dispersion can be normalized with a sigmoid function, function conversion, maximum-minimum normalization or another normalization method, which is not limited here. The smaller the target entropy $-\sum_{v}\frac{n_{v}}{n}\log_{2}\frac{n_{v}}{n}$ is, the more uniform the substance types reflected by the fuzzy sliding window areas are, the more likely the k value equals the number of all substance types in the food to be detected, i.e. the more likely it is the optimal k value, and the larger the target discrimination value $\exp\left(\sum_{v}\frac{n_{v}}{n}\log_{2}\frac{n_{v}}{n}\right)$ and hence C are. Thus, the larger C is, the more likely the corresponding initial k value is the optimal k value.
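Following the reconstructed formula above, a sketch of the target value; grouping equal target dispersions by rounding to six decimals is a practical assumption, since floating-point dispersions are rarely exactly equal:

```python
import numpy as np
from collections import Counter

def target_value(target_dispersions) -> float:
    disp = np.asarray(target_dispersions, dtype=np.float64)
    overall_difference = np.tanh(disp.mean())                       # overall difference degree
    counts = np.asarray(list(Counter(np.round(disp, 6)).values()))  # type distribution numbers
    probs = counts / counts.sum()                                   # type probabilities
    target_entropy = -np.sum(probs * np.log2(probs))                # target entropy
    discrimination = np.exp(-target_entropy)                        # target discrimination value
    return float(overall_difference + discrimination)
```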
(3) And obtaining an optimal k value.
Specifically, since the larger the target value is, the more likely the corresponding k value is the optimal k value, the embodiment of the invention updates the k value with a preset updating step length, obtains the corresponding target value each time the k value is updated, and iterates until a preset cut-off condition is met; updating then ends, and the k value corresponding to the maximum target value is selected as the optimal k value.
Preferably, the method for obtaining the optimal k value is as follows: increasing the initial k value by a preset updating step length to obtain an updated k value until a preset cut-off condition is met, and ending updating; acquiring an updated target value according to the updated k value; and taking the k value corresponding to the maximum target value as the optimal k value.
In the embodiment of the invention, the preset updating step length is set to 1; the implementer can set it according to the actual situation, without limitation here. The initial k value is set to 3, so starting from 3 the corresponding target value is obtained, and the k value is increased by the preset updating step length to obtain an updated k value. To prevent the k value from being updated indefinitely, a preset cut-off condition is set. One option is a maximum k value: when the k value reaches the set maximum, updating stops and the k value corresponding to the maximum target value is selected as the optimal k value. The other is a preset number of updates: when the number of updates of the k value equals the preset number, updating stops and the k value corresponding to the maximum target value is selected as the optimal k value. The embodiment of the invention sets the preset number of updates to 20; updating stops when the k value has been updated 20 times. The initial k value plus the 20 updated k values give 21 k values in total, corresponding to 21 target values, and the k value corresponding to the largest of these 21 target values is selected as the optimal k value.
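A sketch of the search for the optimal k value, combining the helpers above; the initial k of 3, the step of 1 and the 20 updates follow the embodiment, while fuzzy_windows (a list of window arrays) is an assumed input format:

```python
def optimal_k(fuzzy_windows, k_init: int = 3, step: int = 1, n_updates: int = 20) -> int:
    best_k, best_c, k = k_init, float("-inf"), k_init
    for _ in range(n_updates + 1):                  # initial k value plus 20 updates = 21 candidates
        dispersions = [target_dispersion(w, k)[0] for w in fuzzy_windows]
        c = target_value(dispersions)               # target value for this k value
        if c > best_c:
            best_k, best_c = k, c
        k += step                                   # preset updating step length
    return best_k
```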
Using the optimal k value as the k value of the k-means clustering algorithm, the multispectral image is clustered according to the gray values of its pixel points, with pixel points of the same color forming one category, so that the regions of different colors in the multispectral image are classified accurately. Because different colors in the multispectral image correspond to different substances, the regions of the various substances in the food to be detected, i.e. the cranberry, are obtained accurately; in particular, the dietary fiber region in the cranberry is obtained accurately, and the dietary fiber content of the cranberry can then be determined.
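A sketch of the final clustering step; scikit-learn's KMeans is an assumed stand-in for any standard k-means implementation, and clustering on the gray value alone mirrors the description above:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_substance_regions(gray_image: np.ndarray, k_opt: int) -> np.ndarray:
    """Cluster pixels by gray value; the label map delineates the substance regions."""
    pixels = gray_image.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=k_opt, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(gray_image.shape)
```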
The embodiment of the invention takes cranberries only as an example; the same method can be used to enhance the multispectral images of other foods, so that the regions of different colors in the multispectral image are divided more clearly, the substance types in the food to be detected are obtained accurately, and the food to be detected can be inspected conveniently.
This completes the embodiment of the present invention.
In summary, the embodiment of the invention acquires a multispectral image of the food to be detected; slides a window over the multispectral image, obtains the gray level difference dispersion of each sliding window area from the gray values of its pixel points, and screens out the fuzzy sliding window areas; obtains the optimal k value of the k-means clustering algorithm from the gray value distribution of the pixel points in the fuzzy sliding window areas; and clusters the multispectral image with the optimal k value to obtain the regions of the various substances in the food to be detected. Because the multispectral image is clustered with the optimal k value, the regions of different colors are classified accurately, the regions of the various substances in the food to be detected are obtained accurately, and the food can be detected accurately and conveniently.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (8)

1. The intelligent enhancement method for the multispectral food image is characterized by comprising the following steps of:
acquiring multispectral images of food to be detected;
sliding windows with preset sizes are set to slide in the multispectral image according to preset step sizes, and gray level difference dispersion of each sliding window area is obtained according to gray level values of pixel points in each sliding window area; screening out a fuzzy sliding window area according to the gray level difference dispersion;
acquiring an optimal k value in a k-means clustering algorithm according to gray value distribution of pixel points in the fuzzy sliding window area; clustering the multispectral images according to the optimal k value to obtain areas of various substances in the food to be detected;
the method for obtaining the optimal k value comprises the following steps: setting the initial k value, selecting k target pixel points from each fuzzy sliding window area, acquiring target dispersion of each fuzzy sliding window area according to the gray value of the target pixel points in each fuzzy sliding window area, and acquiring a target value according to the size and distribution of the target dispersion; updating the initial k value according to a preset updating step length, acquiring an updated target value according to the updated k value, and taking the k value corresponding to the update as an optimal k value when the updated target value meets a preset condition;
the method for screening out the fuzzy sliding window area comprises the following steps:
sequencing the gray level difference dispersion of each sliding window area from small to large to obtain a dispersion sequence;
taking every two adjacent gray level difference dispersions in the dispersion sequence as a matching pair;
obtaining the difference degree of the matching pair according to the difference of the gray level difference dispersion in the matching pair;
obtaining the average value of the difference degrees of all the matching pairs as a standard value;
screening out a fuzzy sliding window area according to the standard value;
the target value obtaining method comprises the following steps:
acquiring the average value of all the target dispersions as an average dispersion;
normalizing the average dispersion to obtain an overall difference degree;
dividing equal target dispersion into the same type of dispersion, and obtaining the number of fuzzy sliding window areas corresponding to each type of dispersion as the type distribution number;
acquiring the total number of the fuzzy sliding window areas as fuzzy quantity;
taking the ratio of the type distribution quantity of each type of dispersion to the fuzzy quantity as the type probability of the corresponding type of dispersion;
acquiring entropy of the type probability as a target entropy;
the result of carrying out negative correlation and normalization on the target entropy is used as a target discrimination value;
and obtaining a target value according to the overall difference degree and the target discrimination value.
2. The intelligent enhancement method of a multispectral food image according to claim 1, wherein the method for obtaining the gray scale difference dispersion comprises the following steps:
optionally selecting a sliding window area as a reference sliding window area, and acquiring the absolute value of the difference value between the gray values of every two pixel points in the reference sliding window area as a gray level difference;
and acquiring the standard deviation of the gray scale difference as the gray scale difference dispersion of the reference sliding window area.
3. The intelligent enhancement method of multispectral food images according to claim 1, wherein the method for obtaining the difference degree is as follows:
and taking the absolute value of the difference value between the two gray level difference dispersions in the matching pair as the difference degree of the matching pair.
4. The intelligent enhancement method for multispectral food images according to claim 1, wherein the method for screening out the fuzzy sliding window area according to the standard value is as follows: arranging the positions of the matching pairs in the dispersion sequence according to the corresponding gray level difference dispersion to obtain a matching pair sequence;
according to the sequence of the matching pairs, the matching pair which first appears and is corresponding to the difference degree that the error of the standard value is smaller than the preset error is used as a critical matching pair;
taking the gray level difference dispersion larger than or equal to the second gray level difference dispersion in the critical matching pair in the dispersion sequence as an abnormal gray level difference dispersion; the arrangement order of the gray level difference dispersion in the matching pair is consistent with the arrangement order in the dispersion sequence;
and taking the sliding window area corresponding to the abnormal gray level difference dispersion as a fuzzy sliding window area.
5. The intelligent enhancement method for multispectral food images according to claim 2, wherein the method for selecting k target pixel points from each fuzzy sliding window area and obtaining the target dispersion of each fuzzy sliding window area according to the gray values of the target pixel points comprises the following steps:
arbitrarily selecting a fuzzy sliding window area as a target fuzzy sliding window area, selecting k pixel points in the target fuzzy sliding window area as reference pixel points, and acquiring the gray level difference between every two reference pixel points as a reference gray level difference;
acquiring the variance of the reference gray level differences as a reference variance;
changing the selection of reference pixel points to obtain all reference variances of the target fuzzy sliding window area;
taking the k pixel points corresponding to the maximum reference variance as the target pixel points;
and taking the maximum reference variance as the target dispersion of the target fuzzy sliding window area.
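A sketch of claim 5, enumerating every choice of k reference pixel points exactly as the claim describes. The enumeration is exponential in the window size and is shown only to make the definition of the target dispersion concrete; a practical implementation would need a heuristic.

```python
import numpy as np
from itertools import combinations

def target_pixels_and_dispersion(window, k):
    """For one fuzzy sliding window area, try every choice of k reference pixel
    points, compute the variance of their pairwise gray level differences, and
    keep the choice with the maximum variance (claim 5).  k is assumed >= 2.
    """
    values = np.asarray(window, dtype=float).ravel()
    best_var, best_idx = -1.0, None
    for idx in combinations(range(values.size), k):
        pts = values[list(idx)]
        diffs = [abs(a - b) for a, b in combinations(pts, 2)]  # reference gray level differences
        var = float(np.var(diffs))                              # reference variance
        if var > best_var:
            best_var, best_idx = var, idx
    return best_idx, best_var   # target pixel points, target dispersion
```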
6. The intelligent enhancement method for multispectral food images according to claim 1, wherein the method for obtaining the target value according to the overall difference degree and the target discrimination value is as follows:
and taking the sum of the overall difference degree and the target discrimination value as the target value.
7. The intelligent enhancement method for multispectral food images according to claim 1, wherein the method for obtaining the optimal k value comprises the following steps:
repeatedly increasing the initial k value by a preset update step to obtain updated k values, and ending the updating when a preset cut-off condition is met;
acquiring an updated target value according to the updated k value;
and taking the k value corresponding to the maximum target value as the optimal k value.
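A sketch of the k-value search in claim 7. The cut-off condition is not specified in the claim, so an assumed upper bound max_k is used here, and target_value_for_k stands for the whole pipeline above evaluated at a given k.

```python
def optimal_k(initial_k, step, max_k, target_value_for_k):
    """Increase k from the initial value by a preset update step until the
    (assumed) cut-off condition k > max_k is met, evaluate the target value for
    each k, and return the k with the maximum target value (claim 7)."""
    best_k, best_value = initial_k, target_value_for_k(initial_k)
    k = initial_k + step
    while k <= max_k:                      # preset cut-off condition (assumed)
        value = target_value_for_k(k)
        if value > best_value:
            best_k, best_value = k, value
        k += step
    return best_k
```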
8. The intelligent enhancement method for multispectral food images according to claim 1, wherein the method for clustering the multispectral images according to the optimal k value to obtain the areas of various substances in the food to be detected comprises the following steps:
acquiring categories in the multispectral image through a k-means clustering algorithm according to the optimal k value;
and the area corresponding to each category is the area of one kind of substance in the food to be detected.
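A sketch of claim 8 using scikit-learn's k-means; the multispectral image is assumed to be an (H, W, bands) array whose pixels are clustered in spectral space, and the returned label map gives the areas of the different substances.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_substances(multispectral_image, optimal_k):
    """Cluster the pixels of a multispectral image with k-means using the
    optimal k (claim 8) and return an (H, W) label map; the region of each
    label corresponds to one substance in the food to be detected."""
    h, w, bands = multispectral_image.shape
    pixels = multispectral_image.reshape(-1, bands).astype(float)
    labels = KMeans(n_clusters=optimal_k, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)
```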
CN202311040354.4A 2023-08-18 2023-08-18 Multispectral food image intelligent enhancement method Active CN116758074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311040354.4A CN116758074B (en) 2023-08-18 2023-08-18 Multispectral food image intelligent enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311040354.4A CN116758074B (en) 2023-08-18 2023-08-18 Multispectral food image intelligent enhancement method

Publications (2)

Publication Number Publication Date
CN116758074A CN116758074A (en) 2023-09-15
CN116758074B true CN116758074B (en) 2024-04-05

Family

ID=87961281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311040354.4A Active CN116758074B (en) 2023-08-18 2023-08-18 Multispectral food image intelligent enhancement method

Country Status (1)

Country Link
CN (1) CN116758074B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117975444B (en) * 2024-03-28 2024-06-14 广东蛟龙电器有限公司 Food material image recognition method for food crusher

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708981A (en) * 2020-05-19 2020-09-25 北京航空航天大学 Graph triangle counting method based on bit operation
CN114581428A (en) * 2022-03-13 2022-06-03 江苏涂博士新材料有限公司 Powder coating adhesion degree detection method based on optical means
CN114994102A (en) * 2022-08-04 2022-09-02 武汉钰品研生物科技有限公司 X-ray-based food foreign matter traceless rapid detection method
CN115100203A (en) * 2022-08-25 2022-09-23 山东振鹏建筑钢品科技有限公司 Steel bar polishing and rust removing quality detection method
CN115330820A (en) * 2022-10-14 2022-11-11 江苏启灏医疗科技有限公司 Tooth image segmentation method based on X-ray film

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image Segmentation Using Transition Region and K-Means Clustering; Ahmad Wahyu Rosyadi et al.; IAENG International Journal of Computer Science; 20200331; entire document *
Research on Key Technologies of Pedestrian Detection in Road Scenes; Xu Zhewei; Doctoral Electronic Journals; 20210115 (No. 1); entire document *

Also Published As

Publication number Publication date
CN116758074A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN111242961B (en) Automatic film reading method and system for PD-L1 antibody staining section
CN114549522A (en) Textile quality detection method based on target detection
EP3663975A1 (en) Method and system for learning pixel visual context from object characteristics to generate rich semantic images
CN115294113A (en) Wood veneer quality detection method
CN116758074B (en) Multispectral food image intelligent enhancement method
CN110161233B (en) Rapid quantitative detection method for immunochromatography test paper card
CN114627125B (en) Stainless steel tablet press surface quality evaluation method based on optical means
CN109415753B (en) Method and system for identifying gram type of bacteria
CN112215790A (en) KI67 index analysis method based on deep learning
CN115797352B (en) Tongue picture image processing system for traditional Chinese medicine health-care physique detection
CN115994907B (en) Intelligent processing system and method for comprehensive information of food detection mechanism
CN116703911B (en) LED lamp production quality detecting system
CN116883674B (en) Multispectral image denoising device and food quality detection system using same
CN116152242B (en) Visual detection system of natural leather defect for basketball
Naito et al. Identification and segmentation of myelinated nerve fibers in a cross-sectional optical microscopic image using a deep learning model
CN115294116A (en) Method, device and system for evaluating dyeing quality of textile material based on artificial intelligence
CN114677671A (en) Automatic identifying method for old ribs of preserved szechuan pickle based on multispectral image and deep learning
Barburiceanu et al. Grape leaf disease classification using LBP-derived texture operators and colour
Zuñiga et al. Grape maturity estimation based on seed images and neural networks
CN116559111A (en) Sorghum variety identification method based on hyperspectral imaging technology
CN116246174A (en) Sweet potato variety identification method based on image processing
CN117274293B (en) Accurate bacterial colony dividing method based on image features
CN117237747B (en) Hardware defect classification and identification method based on artificial intelligence
CN113658207A (en) Retinal vessel segmentation method and device based on guide filtering
Ji et al. Apple color automatic grading method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant