CN117934483B - Cod sausage production and processing quality detection method based on computer vision - Google Patents

Cod sausage production and processing quality detection method based on computer vision

Info

Publication number
CN117934483B
CN117934483B
Authority
CN
China
Prior art keywords
value
raw material
pixel
cluster
pixel point
Prior art date
Legal status
Active
Application number
CN202410338544.2A
Other languages
Chinese (zh)
Other versions
CN117934483A (en)
Inventor
刘勤
刘敏
张珂
Current Assignee
Qingdao Jiameiyang Food Co ltd
Original Assignee
Qingdao Jiameiyang Food Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Jiameiyang Food Co ltd filed Critical Qingdao Jiameiyang Food Co ltd
Priority to CN202410338544.2A
Publication of CN117934483A
Application granted
Publication of CN117934483B

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a method for detecting the production and processing quality of cod sausage based on computer vision, which comprises the following steps: acquiring a raw material image during cod sausage processing; obtaining a saliency value for each pixel point according to the gray-value differences between each pixel point and its surrounding pixel points in the raw material image; clustering the raw material image according to the saliency value of each pixel point to obtain a saliency value for each cluster; obtaining a judgment threshold for the raw material image according to the differences between the saliency values of all clusters; and performing threshold segmentation according to the judgment threshold to obtain the impurity regions in the raw material image. The invention improves the accuracy of impurity-region segmentation through an adaptive judgment threshold.

Description

Cod sausage production and processing quality detection method based on computer vision
Technical Field
The invention relates to the technical field of image processing, and in particular to a cod sausage production and processing quality detection method based on computer vision.
Background
Cod sausage is an important marine product with high protein and low fat, and it is widely used in food processing and catering. Effective detection of its production and processing quality ensures product quality and safety. Traditional cod sausage quality detection generally relies on manual visual inspection or simple mechanical detection equipment, which suffers from low efficiency, strong subjectivity and susceptibility to human factors. Computer vision technology has developed rapidly in recent years and has achieved significant results in image processing, pattern recognition and object detection. With computer vision, the quality of the cod sausage production and processing process can be detected automatically and accurately. During production and processing, impurities may be mixed into the raw material, so the raw material used to produce cod sausage needs to undergo impurity detection.
In the prior art, the saliency detection (CA) algorithm is a commonly used image segmentation algorithm whose goal is to accurately detect the most attention-grabbing and salient objects or regions in an image. It identifies the most prominent objects or regions by effectively integrating color and texture information, and it combines color contrast with a spatial attention mechanism, enabling accurate localization of salient regions in complex scenes. The algorithm obtains salient regions by calculating a saliency value for each region and then screening with a fixed threshold. However, when an image of cod sausage raw material is segmented, multiple impurity regions may be present, so screening with a fixed threshold may be inaccurate, which reduces the accuracy of computer-vision-based detection of cod sausage production and processing quality.
Disclosure of Invention
The invention provides a cod sausage production and processing quality detection method based on computer vision, which aims to solve the above-mentioned problems.
The invention discloses a cod sausage production and processing quality detection method based on computer vision, which adopts the following technical scheme:
An embodiment of the invention provides a method for detecting the production and processing quality of cod sausage based on computer vision, which comprises the following steps:
acquiring a raw material image during cod sausage processing;
obtaining a saliency value for each pixel point according to the gray-value differences between each pixel point and its surrounding pixel points in the raw material image;
clustering the raw material image according to the saliency value of each pixel point to obtain a saliency value for each cluster;
obtaining a judgment threshold for the raw material image according to the differences between the saliency values of all the clusters;
and performing threshold segmentation according to the judgment threshold to obtain the impurity regions in the raw material image.
Further, the method for obtaining the saliency value of each pixel point according to the gray value difference of the pixels around each pixel point in the raw material image comprises the following specific steps:
constructing three windows corresponding to each pixel point in the raw material image;
and obtaining the significance value of each pixel point according to the gray values of all the pixel points in the three windows corresponding to each pixel point.
Further, three windows corresponding to each pixel point are constructed in the raw material image, and the method comprises the following specific steps:
respectively constructing, with each pixel point in the raw material image as a center, three windows of sizes k1×k1, k2×k2 and k3×k3, wherein k1 is the preset first window side length, k2 is the preset second window side length, and k3 is the preset third window side length.
Further, according to the gray values of all the pixel points in the three windows corresponding to each pixel point, the significance value of each pixel point is obtained, which comprises the following specific steps:
Obtaining a first difference of each window corresponding to each pixel point according to the difference between gray values of all the pixel points in each window corresponding to each pixel point;
Obtaining a second difference of each pixel point according to the differences among the three windows corresponding to each pixel point;
and obtaining the significance value of each pixel point according to the first difference of each window corresponding to each pixel point and the second difference of each pixel point.
Further, according to the first difference of each window corresponding to each pixel point and the second difference of each pixel point, a significance value of each pixel point is obtained, and a corresponding specific calculation formula is as follows:
wherein S_i denotes the saliency value of the i-th pixel point; g_i denotes the gray value of the i-th pixel point; g_{i,n,j} denotes the gray value of the j-th pixel point in the n-th window of the i-th pixel point; σ²_{i,n} denotes the variance of the gray values of the pixel points in the n-th window of the i-th pixel point; μ_{i,1} denotes the mean gray value of the pixel points in the first window of the i-th pixel point; μ_{i,n} denotes the mean gray value of the pixel points in the n-th window of the i-th pixel point; M_{i,n} denotes the number of pixel points contained in the n-th window of the i-th pixel point; norm( ) denotes a linear normalization function; E_i denotes the second difference of the i-th pixel point; D_{i,n} denotes the first difference of the n-th window of the i-th pixel point; and | | denotes the absolute value function.
Further, clustering operation is carried out on the raw material images according to the significance value of each pixel point, and the significance value of each cluster is obtained, which comprises the following specific steps:
clustering the raw material image by using a DBSCAN density clustering algorithm according to the significance value of each pixel point in the raw material image to obtain a plurality of clusters;
and obtaining the significance value of each cluster according to the difference of the significance values of the pixel points among the clusters.
Further, according to the difference of the saliency values of the pixel points among the clusters, the saliency value of each cluster is obtained, and the corresponding specific formula is as follows:
wherein Q_a denotes the saliency value of the a-th cluster; μ_a denotes the average saliency value of all pixel points in the a-th cluster; μ_b denotes the average saliency value of all pixel points in the b-th cluster among the clusters in the raw material image other than the a-th cluster; σ²_a denotes the variance of the saliency values of all pixel points in the a-th cluster; m_a denotes the median of the saliency values of all pixel points in the a-th cluster when sorted from small to large; K denotes the number of clusters in the raw material image; norm( ) denotes a linear normalization function; and | | denotes the absolute value function.
Further, the judging threshold value of the raw material image is obtained according to the difference of the significance values of all the clusters, and the method comprises the following specific steps:
obtaining a distance characteristic value of the cluster according to the distance between the cluster centers of the cluster in the raw material image;
And obtaining a judgment threshold according to the distance characteristic value of the cluster and the significance value of the cluster.
Further, according to the distance characteristic value of the cluster and the significance value of the cluster, a judgment threshold value is obtained, and the corresponding specific formula is as follows:
wherein T denotes the judgment threshold; Q_mean denotes the mean of the saliency values of all clusters; Q_max denotes the maximum saliency value among all clusters; Q_min denotes the minimum saliency value among all clusters; Q_a denotes the saliency value of the a-th cluster; d_a denotes the Euclidean distance between the cluster center of the a-th cluster in the raw material image and the cluster center of the cluster with the maximum saliency value; K denotes the number of clusters in the raw material image; D denotes the distance characteristic value of the clusters; and | | denotes the absolute value function.
Further, the method for obtaining the impurity region in the raw material image by threshold segmentation according to the judgment threshold comprises the following specific steps:
marking all clusters in the raw material image whose saliency value is larger than the judgment threshold as impurity regions.
The technical scheme of the invention has the beneficial effects that:
In the embodiment of the invention, a raw material image is acquired during cod sausage processing. A saliency value is obtained for each pixel point according to the gray-value differences between each pixel point and its surrounding pixel points in the raw material image; calculating the saliency value of each pixel point improves the accuracy of the adaptive judgment threshold, so that the impurity regions in the raw material image are detected with an accurate judgment threshold. The raw material image is then clustered according to the saliency value of each pixel point to obtain a saliency value for each cluster; a judgment threshold for the raw material image is obtained according to the differences between the saliency values of all clusters; and threshold segmentation is performed according to the judgment threshold to obtain the impurity regions in the raw material image. By adaptively determining the judgment threshold used to detect impurity regions in the raw material image, the method improves the accuracy of threshold segmentation of the raw material image, and thereby improves the accuracy of computer-vision-based detection of cod sausage production and processing quality.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart showing the steps of a method for detecting the production and processing quality of cod sausage based on computer vision.
Detailed Description
To further describe the technical means adopted by the invention to achieve its intended aim and their effects, the computer-vision-based cod sausage production and processing quality detection method provided by the invention, together with its specific implementation, structure, features and effects, is described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes in detail, with reference to the accompanying drawings, a specific scheme of the cod sausage production and processing quality detection method based on computer vision provided by the invention.
Referring to FIG. 1, a flowchart of a method for detecting the production and processing quality of cod sausage based on computer vision according to an embodiment of the invention is shown; the method includes the following steps:
Step S001, acquiring a raw material image during cod sausage processing.
The main purpose of this embodiment is to detect impurities in the raw material during cod sausage processing. First, a raw material image of the cod sausage processing process needs to be acquired; it can be captured by an industrial camera in a top-down view. Because the environment of the processing workshop is complex, noise may exist in the acquired image, so noise reduction is required. Here, the image is denoised with a median filtering algorithm to obtain a clear raw material image. Median filtering is a well-known technique and its details are not described here.
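As a brief sketch of how this step could be realized (assuming OpenCV is used; the file names and the 5×5 kernel size are illustrative assumptions, not values fixed by the embodiment):

```python
import cv2

# Load the raw material image captured top-down by the industrial camera
# ("raw_material.png" is a hypothetical file name used for illustration).
raw = cv2.imread("raw_material.png", cv2.IMREAD_GRAYSCALE)

# Median filtering suppresses the impulse-like noise picked up in the processing
# workshop while preserving edges; the 5x5 kernel is an assumed example size.
denoised = cv2.medianBlur(raw, 5)

cv2.imwrite("raw_material_denoised.png", denoised)
```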
Step S002, obtaining a saliency value for each pixel point according to the gray-value differences between each pixel point and its surrounding pixel points in the raw material image.
During cod sausage processing, the detection of impurities in the raw material is critical. The cod is first processed: the viscera and fish heads are removed and the fish is thoroughly cleaned to ensure its surface is clean. The cod is then fed by a conveyor belt into a grinder for grinding. Before this, a saliency detection algorithm can be used to detect foreign objects in the fish flesh; it combines color contrast and a spatial attention mechanism to accurately identify the most prominent objects or regions in the image. By calculating a saliency value for each pixel point and clustering the pixel points according to these values, impurity regions that may exist in the image can be effectively located. This process helps ensure the quality and safety of the cod sausage product.
Specifically, the first window side length k1, the second window side length k2 and the third window side length k3 are preset in this embodiment; the values are described by way of example, other values may be set in other embodiments, and this embodiment is not limited thereto. With each pixel point as a center, three windows of sizes k1×k1, k2×k2 and k3×k3 are respectively constructed, forming a structure similar to concentric rings. Then, under the different windows, the saliency value of each pixel point is obtained from the gray differences between the central pixel point and its surrounding pixel points.
The calculation formula is as follows:
wherein S_i denotes the saliency value of the i-th pixel point; g_i denotes the gray value of the i-th pixel point; g_{i,n,j} denotes the gray value of the j-th pixel point in the n-th window of the i-th pixel point; σ²_{i,n} denotes the variance of the gray values of the pixel points in the n-th window of the i-th pixel point; μ_{i,1} denotes the mean gray value of the pixel points in the first window of the i-th pixel point; μ_{i,n} denotes the mean gray value of the pixel points in the n-th window of the i-th pixel point; N denotes the set of three windows constructed for the i-th pixel point; M_{i,n} denotes the number of pixel points contained in the n-th window of the i-th pixel point; N′ denotes the two windows, other than the first, among the three constructed windows; and norm( ) denotes a linear normalization function.
Specifically, |g_i − g_{i,n,j}| denotes the difference between the gray value of the i-th pixel point in the raw material image and the gray value of the j-th pixel point in its n-th window; the larger this difference, the larger the gray difference between the current pixel point and its neighborhood, and therefore the more salient the pixel point. Averaging over all pixel points in the n-th window gives the mean gray difference between the i-th pixel point and that window. The ratio of the gray value of the i-th pixel point to the gray variance of its n-th window is also used: the larger this ratio, the smaller the variance of the n-th window, hence the smaller the degree of gray variation within the window and the more salient the pixel point. The mean of these characteristic values over the windows of different scales is then taken; the larger this mean, the more salient the current pixel point. |μ_{i,1} − μ_{i,n}| denotes the difference between the gray mean of the first window and the gray mean of the n-th window of the i-th pixel point; the larger this difference, the larger the degree of gray change while extending outward from the i-th pixel point, and the larger the gray difference of the current pixel point. E_i is the second difference of the i-th pixel point, and D_{i,n} is the first difference of the n-th window of the i-th pixel point.
Thereby, a saliency value for each pixel point can be obtained.
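The exact expression appears in the original only as an embedded formula image, so the sketch below implements one plausible reading of the description: for each of the three windows it combines the mean gray difference to the central pixel, a term that grows as the window variance shrinks, and the change of the window mean relative to the first window, then averages over the windows and linearly normalizes the result. The additive combination, the side lengths (3, 5, 7) and all function names are assumptions made for illustration.

```python
import numpy as np

def window_pixels(img, r, c, k):
    """Gray values inside the k×k window centered at (r, c), clipped at the image border."""
    h = k // 2
    return img[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1].astype(np.float64).ravel()

def pixel_saliency(img, r, c, sides=(3, 5, 7)):
    """Assumed composition of the described per-window terms for the pixel at (r, c)."""
    g = float(img[r, c])
    windows = [window_pixels(img, r, c, k) for k in sides]
    first_mean = windows[0].mean()
    terms = []
    for w in windows:
        mean_diff = np.abs(g - w).mean()             # mean gray difference to the neighborhood
        var_term = g / (w.var() + 1e-6)              # larger when the window varies little
        spread = abs(first_mean - w.mean())          # gray change when extending outward
        terms.append(mean_diff + var_term + spread)  # assumed additive combination
    return float(np.mean(terms))

def saliency_map(img, sides=(3, 5, 7)):
    sal = np.zeros(img.shape, dtype=np.float64)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            sal[r, c] = pixel_saliency(img, r, c, sides)
    # linear normalization to [0, 1], matching the norm() used in the description
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-6)
```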
Step S003, clustering the raw material image according to the saliency value of each pixel point to obtain a saliency value for each cluster.
It should be noted that the saliency value of each pixel point is obtained by the above calculation, and the pixel points are then clustered according to their saliency values using the DBSCAN density clustering algorithm. In this embodiment the cluster radius is preset to 3 and the minimum number of points per cluster to 4; these values are described by way of example, other values may be set in other embodiments, and this embodiment is not limited thereto. The standard DBSCAN algorithm measures membership only by distance, whereas the pixel points in this embodiment are uniformly distributed; therefore, when clustering, the difference between the saliency values of adjacent pixel points is used as the measurement condition. The DBSCAN density clustering algorithm is a well-known technique and its details are not described here.
For example, any pixel point is taken as a clustering starting point, and a judgment threshold T1 is preset within the range of the cluster radius (radius 3 in this embodiment); the value of T1 is described by way of example, other values may be set in other embodiments, and this embodiment is not limited thereto. When the difference between the saliency value of a surrounding pixel point and that of the clustering starting point is smaller than the judgment threshold T1, that pixel point is taken as a clustered pixel point of the starting point; the judgment is then carried out sequentially until all pixel points meeting the saliency-value condition of the starting point are obtained.
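A minimal sketch of the modified density clustering described above: clusters are grown from a starting pixel within the spatial radius, with membership decided by the saliency difference to the starting point rather than by distance alone. The cluster radius 3 and minimum cluster size 4 follow the embodiment; the threshold value t1 and the function names are assumed stand-ins, since the preset value of T1 is not reproduced in the text.

```python
import numpy as np
from collections import deque

def grow_clusters(sal, radius=3, t1=0.05, min_pts=4):
    """Region growing over the saliency map: within the spatial radius, a neighbor joins the
    cluster when its saliency differs from the seed's by less than t1 (assumed value).
    Clusters smaller than min_pts are treated as noise, mirroring DBSCAN's minimum-point rule."""
    labels = -np.ones(sal.shape, dtype=int)   # -1: unvisited, -2: noise, >=0: cluster id
    current = 0
    H, W = sal.shape
    for r0 in range(H):
        for c0 in range(W):
            if labels[r0, c0] != -1:
                continue
            seed_val = sal[r0, c0]
            labels[r0, c0] = current
            queue = deque([(r0, c0)])
            members = []
            while queue:
                r, c = queue.popleft()
                members.append((r, c))
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < H and 0 <= cc < W and labels[rr, cc] == -1 \
                                and abs(sal[rr, cc] - seed_val) < t1:
                            labels[rr, cc] = current
                            queue.append((rr, cc))
            if len(members) < min_pts:
                for rr, cc in members:
                    labels[rr, cc] = -2
            else:
                current += 1
    return labels
```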
Further, a plurality of clusters of the raw material image and a cluster center of each cluster may be obtained, and then the saliency value of each cluster is calculated, respectively.
The calculation formula is as follows:
wherein Q_a denotes the saliency value of the a-th cluster; μ_a denotes the average saliency value of all pixel points in the a-th cluster; μ_b denotes the average saliency value of all pixel points in the b-th cluster among the clusters in the raw material image other than the a-th cluster; σ²_a denotes the variance of the saliency values of all pixel points in the a-th cluster; m_a denotes the median of the saliency values of all pixel points in the a-th cluster when sorted from small to large; K denotes the number of clusters in the raw material image; norm( ) denotes a linear normalization function; and | | denotes the absolute value function.
Specifically, |μ_a − μ_b| denotes the difference between the average saliency value of all pixel points in the a-th cluster and that of the b-th cluster; the larger this difference, the larger the difference between the two clusters, so the a-th cluster is more distinct and its saliency value is larger. The first component is therefore the average of the differences between the average saliency value of the a-th cluster and those of all other clusters. norm(σ²_a) is the normalized variance: the larger the variance of the saliency values of all pixel points in a cluster, the greater the degree of disorder of the pixel points within it, and therefore the greater its overall saliency. This term is multiplied by |μ_a − m_a|: the larger the difference between the mean and the median, the more likely a local salient region exists within the cluster, so the average saliency value of the cluster is corrected toward the median and the saliency value of the cluster is evaluated more accurately.
Thus, a saliency value is obtained for each cluster.
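Again, the formula itself is only an image in the original; the sketch below therefore just assembles the two components the description names, the average gap between a cluster's mean saliency and every other cluster's mean, and a correction term formed by the normalized variance times the mean-to-median gap, and sums them. The additive combination and the function name are assumptions.

```python
import numpy as np

def cluster_saliency(sal, labels):
    """Per-cluster saliency from the described components; the sum is an assumed combination."""
    ids = [k for k in np.unique(labels) if k >= 0]
    means = {k: sal[labels == k].mean() for k in ids}
    variances = np.array([sal[labels == k].var() for k in ids])
    # linear normalization of the per-cluster variances, as in the description's norm()
    norm_var = (variances - variances.min()) / (variances.max() - variances.min() + 1e-6)
    scores = {}
    for i, k in enumerate(ids):
        others = [abs(means[k] - means[j]) for j in ids if j != k]
        mean_diff = float(np.mean(others)) if others else 0.0      # gap to the other clusters
        median_gap = abs(means[k] - float(np.median(sal[labels == k])))
        scores[k] = mean_diff + norm_var[i] * median_gap           # assumed additive combination
    return scores
```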
Step S004, obtaining a judgment threshold for the raw material image according to the differences between the saliency values of all the clusters.
The salient regions in the image are obtained by determining a judgment threshold from the saliency values of the different regions. Since a salient region is visually more prominent and therefore more easily noticed, the larger the saliency value of a region, the more likely it is to be an impurity introduced during cod sausage processing. The judgment threshold is therefore obtained from the saliency values of the different regions, and the calculation formula is as follows:
wherein T denotes the judgment threshold; Q_mean denotes the mean of the saliency values of all clusters; Q_max denotes the maximum saliency value among all clusters; Q_min denotes the minimum saliency value among all clusters; Q_a denotes the saliency value of the a-th cluster; d_a denotes the Euclidean distance between the cluster center of the a-th cluster in the raw material image and the cluster center of the cluster with the maximum saliency value; K denotes the number of clusters in the raw material image; D denotes the distance characteristic value of the clusters; and | | denotes the absolute value function.
Specifically, the median of the saliency values of all clusters in the raw material image is also considered: the larger the difference between the mean and the median of the cluster saliency values, the more likely a local extremum exists in the raw material image, and a larger local extremum indicates a region with a larger saliency value. Therefore, when determining the judgment threshold, the threshold should be biased toward this local extremum so that other regions are not segmented as salient regions; the corresponding term acts as a correction weight on the judgment threshold. D denotes the distance characteristic value of the clusters.
Thus, the judgment threshold of the raw material image is obtained from the differences between the saliency values of the different regions.
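The threshold formula is likewise not reproduced in the text. The sketch below only gathers the quantities named in the description (cluster saliency mean, median, extrema, the distances of the cluster centers to the most salient cluster's center, and the number of clusters) and combines them in one assumed way that biases the threshold from the mean toward the local extremum; treat the combination, the function name and the normalization of the distance feature as placeholders rather than the patented formula.

```python
import numpy as np

def judgment_threshold(scores, centers):
    """scores: {cluster_id: saliency value}; centers: {cluster_id: (row, col) cluster center}.
    Assembles the described ingredients; the final combination is an assumption."""
    ids = list(scores)
    q = np.array([scores[k] for k in ids])
    q_mean, q_med = q.mean(), np.median(q)
    q_max, q_min = q.max(), q.min()
    # Euclidean distance of every cluster center to the center of the most salient cluster
    top = ids[int(np.argmax(q))]
    d = np.array([np.hypot(centers[k][0] - centers[top][0],
                           centers[k][1] - centers[top][1]) for k in ids])
    dist_feature = d.mean() / (d.max() + 1e-6)   # assumed normalized distance feature
    # Assumed combination: bias the mean toward the local extremum and add a term
    # scaled by the saliency spread and the distance feature.
    return float(q_mean + abs(q_mean - q_med) + dist_feature * (q_max - q_min) / max(len(ids), 1))
```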
Step S005, performing threshold segmentation according to the judgment threshold to obtain the impurity regions in the raw material image.
The raw material image is threshold-segmented according to the judgment threshold: all clusters in the raw material image whose saliency value is larger than the judgment threshold are taken as salient regions, i.e. impurity regions. The positions and number of these regions are marked, packaged and uploaded so that they can be handled by subsequent staff.
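The final step then reduces to marking every cluster whose saliency value exceeds the judgment threshold. A minimal sketch, reusing the hypothetical helpers from the earlier sketches:

```python
import numpy as np

def impurity_mask(labels, scores, threshold):
    """Binary mask of impurity regions: clusters whose saliency value exceeds the judgment threshold."""
    mask = np.zeros(labels.shape, dtype=bool)
    for cluster_id, score in scores.items():
        if score > threshold:
            mask |= (labels == cluster_id)
    return mask

# Example wiring of the hypothetical helpers defined in the earlier sketches:
# sal = saliency_map(denoised)
# labels = grow_clusters(sal)
# scores = cluster_saliency(sal, labels)
# centers = {k: tuple(np.argwhere(labels == k).mean(axis=0)) for k in scores}
# mask = impurity_mask(labels, scores, judgment_threshold(scores, centers))
```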
This completes the detection process of the present invention.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method for detecting the production and processing quality of cod sausage based on computer vision, characterized by comprising the following steps:
acquiring a raw material image during cod sausage processing;
obtaining a saliency value for each pixel point according to the gray-value differences between each pixel point and its surrounding pixel points in the raw material image;
clustering the raw material image according to the saliency value of each pixel point to obtain a saliency value for each cluster;
obtaining a judgment threshold for the raw material image according to the differences between the saliency values of all the clusters;
performing threshold segmentation according to the judgment threshold to obtain the impurity regions in the raw material image;
wherein obtaining the judgment threshold of the raw material image according to the differences between the saliency values of all the clusters comprises the following specific steps:
obtaining a distance characteristic value of the clusters according to the distances between the cluster centers of the clusters in the raw material image;
obtaining the judgment threshold according to the distance characteristic value of the clusters and the saliency values of the clusters;
wherein the judgment threshold is obtained according to the distance characteristic value of the clusters and the saliency values of the clusters by the following specific formula:
wherein T denotes the judgment threshold; Q_mean denotes the mean of the saliency values of all clusters; Q_max denotes the maximum saliency value among all clusters; Q_min denotes the minimum saliency value among all clusters; Q_a denotes the saliency value of the a-th cluster; d_a denotes the Euclidean distance between the cluster center of the a-th cluster in the raw material image and the cluster center of the cluster with the maximum saliency value; K denotes the number of clusters in the raw material image; D denotes the distance characteristic value of the clusters; and | | denotes the absolute value function.
2. The method for detecting the production and processing quality of cod sausage based on computer vision according to claim 1, wherein obtaining the saliency value of each pixel point according to the gray-value differences between each pixel point and its surrounding pixel points in the raw material image comprises the following specific steps:
constructing three windows corresponding to each pixel point in the raw material image;
and obtaining the significance value of each pixel point according to the gray values of all the pixel points in the three windows corresponding to each pixel point.
3. The method for detecting the production and processing quality of cod sausage based on computer vision according to claim 2, wherein constructing three windows corresponding to each pixel point in the raw material image comprises the following specific steps:
respectively constructing, with each pixel point in the raw material image as a center, three windows of sizes k1×k1, k2×k2 and k3×k3, wherein k1 is the preset first window side length, k2 is the preset second window side length, and k3 is the preset third window side length.
4. The method for detecting the production and processing quality of cod sausage based on computer vision according to claim 2, wherein obtaining the saliency value of each pixel point according to the gray values of all the pixel points in the three windows corresponding to each pixel point comprises the following specific steps:
Obtaining a first difference of each window corresponding to each pixel point according to the difference between gray values of all the pixel points in each window corresponding to each pixel point;
Obtaining a second difference of each pixel point according to the differences among the three windows corresponding to each pixel point;
and obtaining the significance value of each pixel point according to the first difference of each window corresponding to each pixel point and the second difference of each pixel point.
5. The method for detecting the production and processing quality of cod sausage based on computer vision according to claim 4, wherein the saliency value of each pixel point is obtained according to the first difference of each window corresponding to each pixel point and the second difference of each pixel point by the following specific calculation formula:
wherein S_i denotes the saliency value of the i-th pixel point; g_i denotes the gray value of the i-th pixel point; g_{i,n,j} denotes the gray value of the j-th pixel point in the n-th window of the i-th pixel point; σ²_{i,n} denotes the variance of the gray values of the pixel points in the n-th window of the i-th pixel point; μ_{i,1} denotes the mean gray value of the pixel points in the first window of the i-th pixel point; μ_{i,n} denotes the mean gray value of the pixel points in the n-th window of the i-th pixel point; M_{i,n} denotes the number of pixel points contained in the n-th window of the i-th pixel point; norm( ) denotes a linear normalization function; E_i denotes the second difference of the i-th pixel point; D_{i,n} denotes the first difference of the n-th window of the i-th pixel point; and | | denotes the absolute value function.
6. The method for detecting the production and processing quality of cod sausage based on computer vision according to claim 1, wherein clustering the raw material image according to the saliency value of each pixel point to obtain the saliency value of each cluster comprises the following specific steps:
clustering the raw material image by using a DBSCAN density clustering algorithm according to the significance value of each pixel point in the raw material image to obtain a plurality of clusters;
and obtaining the significance value of each cluster according to the difference of the significance values of the pixel points among the clusters.
7. The method for detecting the production and processing quality of cod sausage based on computer vision according to claim 6, wherein the saliency value of each cluster is obtained according to the differences between the saliency values of the pixel points among the clusters by the following specific formula:
wherein Q_a denotes the saliency value of the a-th cluster; μ_a denotes the average saliency value of all pixel points in the a-th cluster; μ_b denotes the average saliency value of all pixel points in the b-th cluster among the clusters in the raw material image other than the a-th cluster; σ²_a denotes the variance of the saliency values of all pixel points in the a-th cluster; m_a denotes the median of the saliency values of all pixel points in the a-th cluster when sorted from small to large; K denotes the number of clusters in the raw material image; norm( ) denotes a linear normalization function; and | | denotes the absolute value function.
8. The method for detecting the production and processing quality of cod sausage based on computer vision according to claim 1, wherein performing threshold segmentation according to the judgment threshold to obtain the impurity regions in the raw material image comprises the following specific steps:
marking all clusters in the raw material image whose saliency value is larger than the judgment threshold as impurity regions.
CN202410338544.2A 2024-03-25 2024-03-25 Cod sausage production and processing quality detection method based on computer vision Active CN117934483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410338544.2A CN117934483B (en) 2024-03-25 2024-03-25 Cod sausage production and processing quality detection method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410338544.2A CN117934483B (en) 2024-03-25 2024-03-25 Cod sausage production and processing quality detection method based on computer vision

Publications (2)

Publication Number Publication Date
CN117934483A CN117934483A (en) 2024-04-26
CN117934483B true CN117934483B (en) 2024-05-31

Family

ID=90761471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410338544.2A Active CN117934483B (en) 2024-03-25 2024-03-25 Cod sausage production and processing quality detection method based on computer vision

Country Status (1)

Country Link
CN (1) CN117934483B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311292A (en) * 2022-10-12 2022-11-08 南通创铭伊诺机械有限公司 Strip steel surface defect detection method and system based on image processing
CN116030034A (en) * 2023-02-21 2023-04-28 青岛精锐机械制造有限公司 Visual identification method for valve surface defects
CN116091504A (en) * 2023-04-11 2023-05-09 山东泰和能源股份有限公司 Connecting pipe connector quality detection method based on image processing
CN116993724A (en) * 2023-09-26 2023-11-03 卡松科技股份有限公司 Visual detection method for coal mine industrial gear oil based on image filtering
CN117333489A (en) * 2023-12-01 2024-01-02 苏州普洛泰科精密工业有限公司 Film damage detection device and detection system
CN117351008A (en) * 2023-12-04 2024-01-05 深圳市阿龙电子有限公司 Smart phone panel surface defect detection method
CN117392469A (en) * 2023-12-11 2024-01-12 深圳市丰源升科技有限公司 Perovskite battery surface coating detection method and system based on machine vision
CN117557569A (en) * 2024-01-12 2024-02-13 吉林交通职业技术学院 Road pavement construction quality detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于视觉显著性特征的乳腺肿块检测方法 (Breast mass detection method based on visual saliency features); 裴晓敏; 季久玉; 刘文进; 光电子·激光 (Journal of Optoelectronics·Laser); 2017-01-31 (01); full text *

Also Published As

Publication number Publication date
CN117934483A (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN114913176B (en) Flexible leather material scab defect detection method and system based on artificial intelligence
CN110738631A (en) prawn shape information detection method and detection system based on images
Mhaski et al. Determination of ripeness and grading of tomato using image analysis on Raspberry Pi
CN111080696B (en) Underwater sea cucumber identification and positioning method based on computer vision
Salem et al. Segmentation of retinal blood vessels based on analysis of the hessian matrix and clustering algorithm
CN116188468B (en) HDMI cable transmission letter sorting intelligent control system
CN114926410A (en) Method for detecting appearance defects of brake disc
CN110570404A (en) Sobel operator-based cloth defect detection method
CN114677671A (en) Automatic identifying method for old ribs of preserved szechuan pickle based on multispectral image and deep learning
Rachmawanto et al. Eggs classification based on egg shell image using k-nearest neighbors classifier
CN117934483B (en) Cod sausage production and processing quality detection method based on computer vision
CN117522864B (en) European pine plate surface flaw detection method based on machine vision
CN110942444A (en) Object detection method and device
CN113313681A (en) Injection molding quality detection system
CN115994870B (en) Image processing method for enhancing denoising
CN114494405A (en) Strawberry shape estimation and automatic grading method and device and computer storage medium
CN111046726B (en) Underwater sea cucumber identification and positioning method based on AI intelligent vision
CN108198140A (en) Three-dimensional collaboration filtering and noise reduction method based on NCSR models
CN115187790A (en) Image contour extraction method based on reference region binarization result
CN110827272B (en) Tire X-ray image defect detection method based on image processing
Huang et al. Identifying apple surface defects based on Gabor features and SVM using machine vision
CN115797345B (en) Seafood baking abnormality identification method
CN117455909B (en) Automatic fish body disease detection method for fish in and out
CN114998565B (en) Semi-finished shrimp size detection device and method
CN112365532A (en) High-precision cherry size grading method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant