CN111915704A - Apple hierarchical identification method based on deep learning - Google Patents
Apple hierarchical identification method based on deep learning
- Publication number: CN111915704A (application CN202010538807.6A)
- Authority: CN (China)
- Prior art keywords: image, apple, deep learning, target, apples
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/70
- G06F16/951 — Indexing; web crawling techniques
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06T11/40 — Filling a planar surface by adding surface attributes, e.g. colour or texture
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/11 — Region-based segmentation
- G06T7/13 — Edge detection
- G06T7/136 — Segmentation involving thresholding
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/90 — Determination of colour characteristics
- G06T2207/20112 — Image segmentation details
- G06T2207/20132 — Image cropping
- G06T2207/30188 — Vegetation; agriculture
- G06T2207/30204 — Marker
Abstract
The invention discloses an apple hierarchical identification method based on deep learning, which comprises the following steps. Step one, construct an apple training data set: 1. crawl apple image data; 2. preprocess the images. Step two, apple target detection: 1. select apple images from the data set constructed in step one as training data, and train with the Darknet framework; 2. after training, photograph apples with a mobile phone and perform apple position detection and labeling on the photos. Step three, apple surface defect detection: take each cropped single-apple picture as an input image, extract every located apple individually, and locate the four types of surface defects. Step four, grade and identify the apples. Compared with the prior art, the invention has the following advantages: 1. lighter weight; 2. strong extensibility; 3. closer to everyday needs.
Description
Technical Field
The invention relates to an apple grading and identification method.
Background
As shown in fig. 1, apple defects mainly take four forms: wormholes, peel scratches, peel cracks, and rot. Wormholes are small, relatively dark defect points scattered as discrete spots over the apple surface. Peel scratches are slender defect regions whose color is relatively light and whose texture differs little from the surrounding apple surface. Peel cracks are large-area damage to the apple surface and may be relatively dark or light in color. Rot likewise appears as deep, large-area damage to the apple surface.
Apples are sold stacked, so the individual objects lie close together, and such a scene has the following characteristics:
(1) the distance between each target is short;
(2) there are more variations in the target color.
To illustrate how incorrect labeling interferes with the information, the erroneous labeling shown in fig. 2 exhibits the following problems:
(1) the label font color is light and differs little from the target background, so the human eye cannot read it directly;
(2) for the two apples at the bottom of the figure, the label positions are too close together, so the association between targets and labels cannot be distinguished;
(3) oversized label fonts occlude useful target information and make the view more cluttered.
Disclosure of Invention
In order to accurately distinguish the position of the apple from the actual apple information, the invention provides an apple hierarchical identification method based on deep learning.
The purpose of the invention is realized by the following technical scheme:
an apple hierarchical identification method based on deep learning comprises the following steps:
step one, constructing an apple training data set
1. Crawling apple image data
Using Python 3.0 to crawl the web images returned by a Baidu picture search for the keyword "apple", and saving the crawled image files locally;
2. image pre-processing
By inspecting the crawled pictures and, where several apples appear in one image, segmenting the image with image processing techniques, each image is made to contain only one apple. The specific steps are as follows:
(1) selecting appropriate color channel
Converting the collected apple image from an RGB mode to an HSL mode, carrying out HSL three-channel separation, and adopting an S channel component as an input signal source for subsequent image processing;
(2) image graying
Further graying the apple image by the weighted average method: separate the obtained RGB three-channel image into a red channel R(i, j), a green channel G(i, j) and a blue channel B(i, j), then merge the three channels into the grayscale image f(i, j) by the following formula:
f(i,j) = 0.3R(i,j) + 0.59G(i,j) + 0.11B(i,j);
(3) image denoising
Carrying out smooth denoising with a weighted average filter: slide a window over the image and process each pixel point together with its neighborhood, finally obtaining the noise-suppressed image F(i, j) by the following formula:
F(i, j) = Σ_s Σ_t w(s, t) · f(i+s, j+t) / Σ_s Σ_t w(s, t), with the sums taken over the k×l sliding template,
wherein f(i, j) is the original image, k and l are the length and width of the sliding template, and w(s, t) is the template weight;
(4) image segmentation
Using an automatic threshold segmentation method for grayscale images, the optimal global threshold for image segmentation is selected automatically and used to divide the original image into a foreground and a background. The optimal threshold is the one at which the difference between background and foreground is largest; it is obtained by computing, for each gray level, the between-class variance of foreground and background and taking the level that maximizes it;
(5) contour extraction
Selecting the Canny operator for edge detection: keep the points of large gradient magnitude to produce fragmentary edges, then screen all the fragmentary edges with a dual-threshold algorithm and connect them in sequence to extract the edge of the target object;
(6) area extraction
Assuming the target region in the image has length M and width N, with the binary pixel value (0 or 1) denoted B(i, j), where i and j are the pixel abscissa and ordinate, the object area A is calculated by the following formula:
A = Σ_{i=1}^{M} Σ_{j=1}^{N} B(i, j);
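As an illustration, the area formula above amounts to summing the binary mask over the target region (a minimal NumPy sketch, not the patent's code):

```python
import numpy as np

def region_area(binary):
    """Area of the target region: A = sum over i, j of B(i, j) for a 0/1 mask."""
    return int(np.asarray(binary).sum())

# Three foreground pixels -> area 3.
mask = np.array([[0, 1, 1],
                 [0, 1, 0]])
area = region_area(mask)
```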
step two, apple target detection
1. Selecting apple images from the apple data set constructed in step one as training data, and training with the Darknet framework;
2. after training, photograph the apples with a mobile phone and perform apple position detection and labeling on the photos, as follows:
(1) segment and locate each detected target with a rectangular frame: obtain the detected boundary information of the apple target from the YOLO training model and compute its minimum circumscribed axis-aligned rectangle by taking the maximum and minimum abscissa and the maximum and minimum ordinate of the target boundary, which together fix the rectangle's position;
(2) place the label uniformly at the upper-left corner of each target rectangle: after the rectangle is located, take the coordinate of its upper-left corner, i.e. the minimum abscissa and minimum ordinate of the target boundary, and draw a filled rectangle outward from it as the label background;
(3) use a green-and-white color scheme for the label: once the label coordinates are fixed and the background filled, write the detected apple name in white on the green background to complete the apple detection labeling;
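The rectangle positioning in (1) and (2) amounts to taking coordinate extrema of the detected boundary points; a minimal sketch of that step (illustrative only, not the patent's YOLO pipeline):

```python
import numpy as np

def bounding_rect(points):
    """Minimum circumscribed axis-aligned rectangle of detected boundary points.

    points: iterable of (x, y) boundary coordinates.
    Returns (x_min, y_min, x_max, y_max); (x_min, y_min) is also the
    upper-left anchor where the label background would be drawn.
    """
    pts = np.asarray(points)
    x_min, y_min = pts.min(axis=0)   # minimum abscissa and ordinate
    x_max, y_max = pts.max(axis=0)   # maximum abscissa and ordinate
    return int(x_min), int(y_min), int(x_max), int(y_max)

# Hypothetical boundary points of one detected apple.
boundary = [(12, 40), (55, 18), (60, 70), (20, 75)]
rect = bounding_rect(boundary)
```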
step three, detecting the surface defects of the apples
Taking each cropped single-apple picture as an input image, extract every located apple individually and locate the four types of surface defects, as follows:
1. Wormholes
Carry out gray transformation on the image to obtain its grayscale map, segment the grayscale map by gray-level change, extract contours from the resulting binary image, and use the contour positions to fill and label the wormhole locations in red;
2. scratch mark
Firstly denoise the whole image, then perform binarization segmentation using the difference between the denoised image and the original image, and fill and label the scratch positions in red;
3. cracks and rot
Perform color-space conversion on the image from RGB to HSL, carry out image segmentation on the S channel, and mark the regions with peel cracks and rot using a thick red rectangular frame;
step four, apple classification identification
1. According to fruit industry standards, actual experimental capability, and the specific apple grading clauses, apples are divided into super grade, first grade and second grade;
2. Apples conforming to the super, first or second grade are set to GOOD, and non-conforming apples to BAD;
3. The color scheme labels GOOD in green text on a blue background and BAD in red text on a blue background;
4. After the apple surface defect information is accurately obtained, the apples are graded as GOOD or BAD.
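As an illustrative sketch of the two-level decision above (the defect thresholds below are invented for illustration; the patent's actual grade criteria are those of table 1):

```python
def grade_apple(defects):
    """Two-level grading used for labeling: an apple whose surface defects
    keep it within super/first/second grade is GOOD, otherwise BAD.

    defects: dict of per-defect counts. The numeric thresholds here are
    hypothetical placeholders, not the patent's grading clauses.
    """
    if defects.get("rot", 0) > 0 or defects.get("crack", 0) > 0:
        return "BAD"   # large-area damage fails every grade
    if defects.get("wormholes", 0) > 3 or defects.get("scratches", 0) > 5:
        return "BAD"   # hypothetical count limits
    return "GOOD"

label = grade_apple({"wormholes": 1, "scratches": 0})
```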
Compared with the prior art, the invention has the following advantages:
1. Lighter weight: compared with conventional apple detection schemes applied in agriculture and light industry, the method is more portable, and apple detection and grading can be performed with nothing more than a mobile terminal equipped with a camera.
2. Strong extensibility: the whole scheme is built on popular existing open-source platforms; compared with traditional apple grading, which applies only to production scenarios, it is far more extensible.
3. Closer to everyday needs: conventional apple grading is unsuitable for daily use, whereas this method requires no bulky equipment, professional light sources or photographic gear; it conveniently helps people select apples when shopping, and is extremely easy to use.
Drawings
FIG. 1 shows the defect pattern of an apple;
FIG. 2 shows the result of error labeling;
FIG. 3 is a labeled result of the present invention;
FIG. 4 is a wormhole, scratch fill;
FIG. 5 shows a rotting, cracking filling;
FIG. 6 is a hierarchical annotation of apples;
FIG. 7 shows the RGB three-channel separation result;
FIG. 8 shows the result of HSL three-channel separation extraction;
FIG. 9 shows the result of the graying process;
FIG. 10 shows the image denoising result;
FIG. 11 is an image segmentation process;
FIG. 12 illustrates an image segmentation effect;
FIG. 13 is an edge detection process;
FIG. 14 is an apple crawl picture;
FIG. 15 is a single apple image obtained after segmentation;
FIG. 16 is a photograph of an apple taken with a cell phone;
FIG. 17 shows the result of apple position detection;
FIG. 18 is an original apple image with moth-eye;
FIG. 19 is moth eye contour information;
FIG. 20 is the result of wormhole filling;
FIG. 21 is an original apple image with scratches;
FIG. 22 shows the result of binary segmentation of the scratch;
FIG. 23 shows scratch fill results;
FIG. 24 is an image of an original apple with cracks and rot;
FIG. 25 shows the RGB-to-HSL conversion result;
FIG. 26 is the HS space image segmentation result;
FIG. 27 shows crack and rot filling results;
fig. 28 shows apple classification test results.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings, but not limited thereto, and any modification or equivalent replacement of the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention shall be covered by the protection scope of the present invention.
The invention provides an apple hierarchical identification method based on deep learning. Because labeling must let the viewer accurately distinguish target positions and the actual target information, the invention imposes the following labeling constraints:
(a) the position of each target is clearly separated, so that the human eye can distinguish the targets;
(b) the label information contrasts sharply with the background, which aids reading;
(c) since several targets share one scene, each target needs a fixed, unique label position, avoiding overlapping labels that would interfere with normal information.
Therefore, the design scheme is as follows:
(a) dividing and positioning the detected target by using the rectangular frame;
(b) uniformly placing the labeling information at the upper left corner of each target rectangular frame;
(c) using a color scheme of green and white for the labeling information;
the labeling results of this scheme are shown in FIG. 3.
The method specifically comprises the following steps:
first, apple surface defect marking
According to fruit industry standards, actual experimental capability, and the specific apple grading clauses, apple surface defects mainly comprise wormholes, scratches, cracks and rot.
Aiming at the characteristics of each defect, the labeling scheme of the apple surface defect is as follows:
1. Wormholes and scratches: using image processing, the positions of wormholes and scratches on the apple surface are filled in red (fig. 4) to highlight them.
2. Rot and cracks: since rot and cracks are large-area defects, filling them in red would obscure the presentation, so the affected areas are instead marked with a thick red rectangular box (fig. 5).
Second, hierarchical labeling of apple
As shown in table 1, apples can be divided into super, first and second grades according to fruit industry standards, actual experimental capability, and the specific apple grading clauses. However, since this scheme is meant to help people pick the best apples in a market setting, the labeling uses only two levels: apples conforming to the super, first or second grade are GOOD, and non-conforming apples are BAD.
TABLE 1
In this labeling scheme, the color scheme is labeled with green with blue bottom as GOOD and red with blue bottom as BAD (FIG. 6).
Third, image processing
The method takes Red Fuji apples as the research object and performs image preprocessing after acquiring an apple image. The preprocessing operations comprise image denoising, selection of a suitable color channel, image segmentation and contour extraction. The preprocessed image is freed of various interferences before the feature-parameter extraction used in later grading, improving grading accuracy.
1. Color space
And carrying out RGB three-channel separation processing on the collected red Fuji apple picture in OpenCV. Fig. 7 shows a grayscale map of the original image and the extracted result.
The HSL model is a common industry color standard in which colors are produced by varying and superimposing the three channels hue (H), saturation (S) and lightness (L). The collected Red Fuji apple picture is converted from RGB to HSL and the three HSL channels are separated; the extraction result is shown in fig. 8.
For the original color image collected by the acquisition system, small shadows inevitably appear around the apple body because of varying test environments or imperfect illumination. In the R, G, B channel extraction results of fig. 7, these shadows show up to different degrees, strongly interfering with the later image segmentation step and degrading the precision and accuracy of the sorting system. By contrast, in the H, S, L channel extraction results of fig. 8, the background of the S-channel component is pure black everywhere outside the apple body; foreground and background therefore differ strongly and can easily be divided by threshold segmentation. The S-channel component is used here as the input signal source for subsequent image processing.
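As an illustrative sketch of this channel choice (the patent itself works in OpenCV; the function below is a minimal NumPy version of the standard RGB-to-HSL saturation formula):

```python
import numpy as np

def rgb_to_hsl_s_channel(img):
    """Return the HSL saturation (S) channel of an RGB image.

    img: float array in [0, 1] with shape (H, W, 3).
    """
    cmax = img.max(axis=2)
    cmin = img.min(axis=2)
    light = (cmax + cmin) / 2.0          # HSL lightness
    delta = cmax - cmin
    # S = delta / (1 - |2L - 1|), guarding against division by zero
    denom = 1.0 - np.abs(2.0 * light - 1.0)
    return np.where(denom > 1e-12, delta / np.maximum(denom, 1e-12), 0.0)

# A saturated red pixel has S = 1; a neutral gray pixel has S = 0.
demo = np.array([[[1.0, 0.0, 0.0], [0.5, 0.5, 0.5]]])
s = rgb_to_hsl_s_channel(demo)
```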
2. Image graying
Because the relevant apple defects are unrelated to color while the unprocessed images contain abundant color information, the information content is unnecessarily large and hinders subsequent processing. The invention therefore introduces image graying: the grayed image carries far less information than the color image, occupies much less memory, and correspondingly reduces the computation workload, making image processing operations more convenient and markedly improving detection efficiency.
The human eye perceives different colors with slightly different sensitivity, green being the most sensitive. Using the gray values acquired from the three channels of the RGB color image, the three values are averaged with different weights to obtain a more appropriate gray value, with the specific formula:
f(i,j)=0.3R(i,j)+0.59G(i,j)+0.11B(i,j) (1);
where f (i, j) is the generated gray scale map, R (i, j) is the original image R channel, G (i, j) is the original image G channel, and B (i, j) is the original image B channel.
Since the effect achieved by the weighted average method is closer to that of human eyes, the invention further performs graying processing on the apple image by the weighted average method (fig. 9).
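The weighted-average graying of formula (1) can be sketched directly in NumPy (an illustration, not the patent's code):

```python
import numpy as np

def weighted_gray(img):
    """Weighted-average grayscale: f = 0.3R + 0.59G + 0.11B (formula (1))."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.3 * r + 0.59 * g + 0.11 * b

# Single pixel with R=100, G=150, B=200:
# 0.3*100 + 0.59*150 + 0.11*200 = 30 + 88.5 + 22 = 140.5
pixel = np.array([[[100.0, 150.0, 200.0]]])
gray = weighted_gray(pixel)
```

The green channel carries the largest weight (0.59), matching the eye's greater sensitivity to green noted above.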
3. Image denoising
Noise in the image can cause a reduction in image quality with uncertainty. Before the image segmentation operation, more unintended objects may be detected if the noise is not removed, because the noise is usually represented as a small point in the image, which can be segmented as an object. Typically the sensor and scanner circuitry will generate such noise. This change in brightness or color can be expressed as different noise types, such as gaussian noise, spike noise, and shot noise. Because the image acquisition system is easy to generate salt and pepper noise in the experiment, the noise source generally comes from two aspects:
(1) in the process of image acquisition
In the process of collecting images by two common types of image sensors, namely a CCD (charge coupled device) and a CMOS (complementary metal oxide semiconductor), various noises can be introduced due to the influence of the material properties of the sensors, the working environment, electronic components, circuit structures and the like, such as thermal noise caused by resistance, channel thermal noise of a field effect tube, photon noise, dark current noise and photoresponse non-uniformity noise.
(2) In the transmission process of image signals
Digital images are often contaminated with various noises during their transmission recording due to imperfections of transmission media and recording devices, etc. In addition, noise may also be introduced into the resulting image when the input object is not as desired at some stage of image processing.
Denoising, also known as smoothing, aims to suppress noise or other small fluctuations, but it may blur the edge details of the image while doing so. Gradient operators, based on the local derivatives of the image function, sharpen edges but also amplify noise; the roles of smoothing and gradient operators are thus opposed. Image smoothing usually suppresses noise by averaging brightness values in a neighborhood. To suppress noise without harming edge information, an edge-preserving smoothing method is considered here: the average uses only those neighborhood points whose properties are similar to the processed point. The invention adopts a weighted average filtering method for smooth denoising.
The weighted average filtering algorithm is a local smoothing algorithm that preserves image edge detail. The main design choices are the size, shape and orientation of the neighborhood and the weight coefficient assigned to each point; a method that assigns each point its own weight is called a weighted average method. The pixel p(i, j) at the center of the neighborhood is called the center pixel. The general principles for choosing weights are:
(1) the central pixel p (i, j) is given a larger weight, and the weights of other pixels are smaller.
(2) And determining the weight according to the distance from the central pixel point p (i, j). The closer pixel points are endowed with larger weights, and the farther pixel points are endowed with smaller weights.
(3) And determining the weight according to the gray value proximity degree of the central pixel point p (i, j). The closer the gray value is to the pixel point, the greater the weight is given, otherwise, the smaller the weight is given.
The improved algorithm below takes the inverse of the gray gradient as the weight, and is called the gradient-inverse weighted average algorithm for short. Let f(i, j) be the image and take a 3×3 region as the filter window. The gray-level matrix D_f within the window consists of the values f(i+k, j+l) for k, l ∈ {−1, 0, 1}.
The matrix W_f(i, j) that takes the inverse of the gray gradient as its weight has entries
w(i+k, j+l) = 1 / d(i+k, j+l), (4)
wherein the gray gradient is
d(i+k, j+l) = |f(i+k, j+l) − f(i, j)|, (5)
with d replaced by a fixed positive constant when the difference is 0.
Note the condition (k, l) ≠ (0, 0) in the calculation of w(i+k, j+l) and d(i+k, j+l) in equations (4) and (5), i.e. k and l cannot both be 0. Finally, giving the center pixel the weight 1/2 and normalizing the remaining weights so that they sum to 1/2, the smoothed image after weighted averaging is
F(i, j) = Σ_k Σ_l w(i+k, j+l) · f(i+k, j+l).
after the noise is filtered, an image as shown in fig. 10 is obtained.
4. Image segmentation
In conventional threshold-based image segmentation, the input is usually the grayscale version of the original image, and a good segmentation requires a strong difference between foreground and background; the concrete segmentation threshold is finally determined by testing. An automatic threshold segmentation algorithm (the Otsu algorithm) is adopted here. In computer vision and image processing, the Otsu algorithm automatically performs clustering-based image thresholding, converting a grayscale image into a binary image. It assumes the image contains two classes of pixels (foreground and background) and computes the optimal threshold separating the two classes so that their combined intra-class variance is minimal and, equivalently, the between-class variance is maximal. The specific flow of the Otsu algorithm is as follows:
step 1: counting the number of each pixel in the gray level;
step 2: calculating the probability distribution of each pixel in the whole image;
and step 3: traversing and searching the gray level, and calculating the inter-class probability of the foreground and the background under the current gray value;
and 4, step 4: calculating inter-class variance under different gray levels;
and 5: the gray level when the inter-class variance is maximum is selected as the global threshold of the image.
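Steps 1–5 above can be sketched directly in code. This is a minimal, dependency-light rendering of Otsu's method, not the patent's implementation:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method following steps 1-5: histogram, per-level probability,
    then a sweep over gray levels maximising the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)  # step 1
    prob = hist / hist.sum()                       # step 2: probability per level
    best_t, best_var = 0, -1.0
    for t in range(1, 256):                        # step 3: traverse gray levels
        w0, w1 = prob[:t].sum(), prob[t:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # step 4: between-class variance
        if var_between > best_var:                 # step 5: keep the maximiser
            best_var, best_t = var_between, t
    return best_t

# Binarise with the global threshold:
# binary = (gray >= otsu_threshold(gray)).astype(np.uint8)
```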
Following the choice of color channel for the apple image, the original color image is denoised and the S-channel component is extracted; the Otsu algorithm then automatically selects the optimal global threshold, achieving automatic threshold segmentation. The method applies widely: for images with a white or black background it accurately segments and extracts the apple body, it is insensitive to shadows in the image, and it segments the foreground accurately. The image segmentation flow is shown in fig. 11.
The results of processing the Red Fuji image according to the segmentation flow of fig. 11 are shown in fig. 12, demonstrating that the method achieves a good segmentation effect on apples.
5. Contour extraction
Image features fall broadly into two categories: visual features and statistical features. Statistical features are manually defined and obtained through simple transformations. Visual features are the most natural class, the ones a person perceives directly, such as an object's contour, brightness, or texture. An edge is the boundary between primitives, between objects, or between an object and the background; for an image it is the most basic feature, and one of the important features to extract during image processing.
An edge is a property attached to an individual pixel; it has both magnitude (intensity) and direction. Edge detection of an object in effect extracts the boundary line between the target object and the background, whose defining characteristic is a sharp change in gray value. Since the gradient of the image's gray-level distribution reflects such sharp changes, edges can be extracted by differentiating the local image function. The edge-detection flow is shown in fig. 13.
The Canny edge detection operator is selected; it offers good detection performance and achieves edge detection while suppressing, rather than amplifying, noise.
The Canny operator first smooths the image by convolving it with a two-dimensional Gaussian. Let the image coordinates be (x, y), the two-dimensional Gaussian function G(x, y), the original image I(x, y), and the smoothed image IG(x, y). The two-dimensional Gaussian function G(x, y) is:
let the convolved image be IG(x, y), then the result of the image convolution is:
where σ is a scale parameter: the larger σ, the wider the smoothing and denoising range; the smaller σ, the narrower.
The gradient magnitude M and gradient direction of the image are computed with finite differences of the first-order partial derivatives. Taking partial derivatives at point (i, j), the partial derivative in the x-direction is Gx(i, j) and in the y-direction Gy(i, j):
The gradient magnitude M at point (i, j) is then:
The gradient direction α at point (i, j) is:
Finally, points with a large change in magnitude are selected to form fragmentary edges; a dual-threshold algorithm then examines all fragmentary edges and connects them in sequence to extract the edge of the target object.
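The gradient step of this pipeline can be sketched as below. The 2 × 2 first-difference stencil is one common choice for Canny and is an assumption here, not a formula stated in the text:

```python
import numpy as np

def gradient_magnitude_direction(img):
    """Finite-difference gradient for the Canny pipeline: first-order
    differences give Gx and Gy, then the magnitude M = sqrt(Gx^2 + Gy^2)
    and the direction alpha = arctan(Gy / Gx) are computed per point."""
    img = img.astype(np.float64)
    # first-order differences averaged over a 2x2 neighbourhood (assumed stencil)
    gx = (img[1:, 1:] - img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]) / 2.0
    gy = (img[1:, 1:] - img[:-1, 1:] + img[1:, :-1] - img[:-1, :-1]) / 2.0
    m = np.hypot(gx, gy)          # gradient magnitude M
    alpha = np.arctan2(gy, gx)    # gradient direction (radians)
    return m, alpha
```

On a vertical step edge the magnitude peaks along the step and the direction is 0 (pointing along +x), matching the intuition that edges are sharp gray-value changes.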
6. Area extraction
The simplest and most natural region attribute is the object's area, which can be computed from the number of pixels enclosed by the target boundary. Let the target region in the image have length M and width N, and let B(i, j) denote the pixel value (0 or 1), with i and j the horizontal and vertical pixel coordinates. The processed image is binary, and for a binary image the object area is computed by the following formula:
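The double sum over B(i, j) reduces to a single count of foreground pixels, for example:

```python
import numpy as np

def region_area(binary):
    """Object area of a binary image: the count of foreground (value 1)
    pixels, i.e. the double sum of B(i, j) over the M x N target region."""
    return int(np.sum(binary == 1))
```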
Example:
first, apple training data set construction
1. Crawling apple image data
Using Python 3.0, pictures were crawled from Baidu image search result pages with the keyword "apple", and the crawled data files were saved in a local folder; 10000 pictures were crawled in total. The stored local apple images include single-apple images, multi-apple images, and images containing text, which would interfere with building the subsequent apple image sample library, so the crawled images are post-processed with image-processing techniques.
2. Image pre-processing
The purpose of image preprocessing is to segment a picture in which multiple apples appear in the same image so that there is one and only one apple in each image.
By observing the crawled pictures and utilizing an image processing technology, the preprocessing scheme is as follows:
(1) Apply threshold segmentation to each picture.
(2) Extract the image contours after threshold segmentation.
(3) Analyze the contour shape with the contour decision formula; if E < 50, the contour is regarded as apple-shaped, i.e. approximately circular.
E = |(xMax - xMin) - (yMax - yMin)| (13);
where E is the difference between the horizontal and vertical extents of the contour, in pixels; xMax and xMin are the maximum and minimum horizontal coordinates of the contour, and yMax and yMin the maximum and minimum vertical coordinates.
(4) Crop the contours that meet the requirement.
The apples are segmented as required, as shown in fig. 15, yielding 17124 single-apple images.
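The contour decision of formula (13) amounts to checking that a contour's bounding box is nearly square. A minimal sketch, with the coordinate lists as assumed inputs:

```python
def is_apple_contour(xs, ys, tol=50):
    """Contour decision of formula (13): E = |(xMax - xMin) - (yMax - yMin)|.
    A contour whose bounding box is nearly square (E < tol pixels) is
    treated as a partial circle, i.e. an apple candidate."""
    e = abs((max(xs) - min(xs)) - (max(ys) - min(ys)))
    return e < tol
```

A 100 × 98-pixel bounding box gives E = 2 < 50 and is kept, while a 300 × 50 one gives E = 250 and is rejected.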
3. Image normalization
After the 17124 single-apple images are obtained, the segmented images differ in size with the apples they contain, so their pixel scales and colors vary, and training directly on such data gives poor results. Each single-apple image is therefore normalized by resizing it to 64 × 64.
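The resize step can be sketched without external libraries using nearest-neighbour sampling; any resampling library would serve equally well, and the interpolation choice is an assumption here:

```python
import numpy as np

def normalize_image(img, size=64):
    """Resize a single-apple crop to size x size by nearest-neighbour
    sampling (a dependency-free stand-in for the 64 x 64 resize step)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]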
Second, experimental environment
Deep-learning network training involves a large number of matrix operations; an ordinary CPU cannot meet the speed requirements of network-model training, so model training is currently accelerated mainly on GPUs.
Table 2 shows the configuration used in this embodiment. Darknet is the deep-learning framework open-sourced by the original YOLO author: a neural-network computing framework implemented in native C with CUDA programming. CUDA is a GPU computing platform from Nvidia that accelerates the matrix operations behind convolution, pooling, and normalized activation-function computation in deep learning. In this embodiment the Darknet source code is compiled against the CUDA9.0 platform and its matching cudnn7.1 patch package, so that the Darknet and Keras frameworks can use GPU acceleration through CUDA + cudnn during training and inference. The environment is Ubuntu16.04 + Darknet + CUDA9.0 + cudnn7.1, and all experiments were completed on an Nvidia Tesla V100 graphics card.
TABLE 2
Name | Related configuration |
Operating system | Ubuntu16.04 |
CPU | Intel Xeon |
Memory | 128GB |
GPU | NVIDIA Tesla |
GPU acceleration library | CUDA9.0 CUDNN7.1 |
Deep learning framework | YOLO DarkNet |
Third, apple target detection experiment
1000 apple pictures from the apple data set are selected as inspection data, and the data are trained with the Darknet framework. After training, a picture is taken with a mobile phone, as shown in fig. 16.
The apple position detection was performed on the graph and labeled to obtain the results shown in fig. 17.
In a real-life scene, this embodiment targets the mobile terminal, so images are captured with the phone camera. To obtain the best detection effect, the factors considered are per-image processing time and detection accuracy; tests are run over different input image sizes, where detection accuracy is the accuracy of the trained model on the test data set.
TABLE 3
Image resolution | Processor | Per-image detection time | Detection accuracy |
3876*2584 | GPU | 2.2s | 98% |
3072*2304 | GPU | 1.1s | 97% |
2580*1936 | GPU | 0.6s | 95% |
1600*1200 | GPU | 0.1s | 95% |
640*480 | GPU | 0.08s | 70% |
As table 3 shows, with accuracy as the priority, the higher the resolution, the better the detection effect; but a larger image means more data to process, so the per-image processing time grows too long for real-time video detection and does not fit the application scene. With speed as the priority, smaller pictures are faster but accuracy drops. On balance, a resolution of 1600 × 1200 is adopted as the optimal scheme: its per-image detection time of 0.1 s meets the real-time detection standard while keeping high detection accuracy.
Fourth, apple defect detection experiment
After target detection, each located apple is extracted individually for surface-defect detection; the four types of surface defects are located, with a cropped single-apple picture as the input image.
1. Insect eye
The original apple image with wormholes is shown in fig. 18. Since wormholes are black and contrast clearly with the background, the image is converted to grayscale; the grayscale image is then segmented by its gray-level change, and contours are extracted from the resulting binary image, as shown in fig. 19. Two contours are obtained after processing; removing the outermost contour leaves the contour of the wormhole defect, and the wormhole is filled and labeled by its contour position according to the labeling scheme of the invention (fig. 20).
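As an illustrative sketch of the idea, dark wormhole pixels can be isolated by a gray-level threshold and localised by their bounding box. The threshold value 60 is an assumption for illustration; the patent instead derives its mask from the gray-level change and full contour extraction:

```python
import numpy as np

def locate_dark_spots(gray, dark_thresh=60):
    """Illustrative wormhole localisation: threshold dark pixels and return
    the bounding box of the dark region as (x0, y0, x1, y1), or None if no
    pixel is darker than the (assumed) threshold."""
    mask = gray < dark_thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (xs.min(), ys.min(), xs.max(), ys.max())
```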
2. Scratch mark
The original apple image with a scratch is shown in fig. 21. Since scratches are inconspicuous in color, they cannot be treated the same way as wormholes. The whole image is first denoised; because a scratch alters the gray-level of the apple surface, it can be regarded as surface noise, so binarizing the difference between the denoised image and the original image yields fig. 22. The highlighted parts in the figure are the scratch positions, which are filled and labeled according to the labeling scheme of the invention (fig. 23).
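The difference-of-images idea can be sketched with a simple 3 × 3 mean filter standing in for the denoising step; both the filter choice and the threshold of 20 are illustrative assumptions:

```python
import numpy as np

def scratch_mask(gray, thresh=20):
    """Scratch segmentation sketch: smooth with a 3x3 mean filter, then
    binarise the absolute difference between original and smoothed images.
    Scratches behave like surface noise, so they survive in the difference."""
    g = gray.astype(np.float64)
    smooth = g.copy()                       # border rows/columns left untouched
    acc = np.zeros_like(g[1:-1, 1:-1])
    for dy in (-1, 0, 1):                   # 3x3 mean via shifted sums
        for dx in (-1, 0, 1):
            acc += g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
    smooth[1:-1, 1:-1] = acc / 9.0
    return (np.abs(g - smooth) > thresh).astype(np.uint8)
```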
3. Cracks and rot
The original apple image with cracks and rot is shown in fig. 24. Cracks and rot appear as large-area color deviations on the apple surface, so the image is converted from RGB color space to HSL, as shown in fig. 25. Segmenting the S channel gives the result shown in fig. 26. The highlighted parts in the figure are the positions of cracks and rot in the apple skin, which are filled and labeled according to the labeling scheme of the invention (fig. 27).
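The RGB-to-HSL conversion and S-channel extraction can be sketched with the standard library's colorsys module, applied per pixel for clarity (slow, but dependency-free; a vectorised conversion would be used in practice):

```python
import colorsys
import numpy as np

def s_channel(rgb):
    """Convert an RGB image (uint8, H x W x 3) to HSL and return the
    saturation (S) channel, the input to the crack/rot segmentation step."""
    h, w, _ = rgb.shape
    s = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            r, g, b = rgb[i, j] / 255.0
            s[i, j] = colorsys.rgb_to_hls(r, g, b)[2]  # colorsys order: (H, L, S)
    return s

# The defect mask would then come from thresholding this channel, e.g.
# mask = s_channel(img) > t for some per-image threshold t (assumed usage).
```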
Five, apple classification experiment
This embodiment tests the scheme on 1600 × 1200 video images captured with a Huawei MATE 20 phone; the video contains two good-quality apples and three defective ones, and the test results are shown in fig. 28. From the input video images the scheme successfully locates all 5 apples and marks them with green rectangular frames; it locates the defect positions of the three defective apples, positioning two wormholes and one broken apple, and accurately marks one rotten apple. After the apple surface-defect information is accurately obtained, the apples are graded GOOD or BAD, giving consumers a clear apple-purchase priority.
Claims (10)
1. An apple hierarchical identification method based on deep learning is characterized by comprising the following steps:
step one, constructing an apple training data set
1. Crawling apple image data
Using Python 3.0, crawling pictures from Baidu image search result pages with the keyword "apple", and storing the crawled data files as local apple images;
2. image pre-processing
By observing the crawled pictures, the pictures with a plurality of apples in the same image are segmented by using an image processing technology, so that each image has only one apple;
step two, apple target detection
1. Selecting an apple graph in the apple dataset constructed in the step one as inspection data, and performing data training by using a Darknet frame;
2. after the training is finished, the apple photos are shot by using a mobile phone, and the apple position detection and labeling are carried out on the photos;
step three, detecting the surface defects of the apples
Taking a single apple picture after screenshot as an input image, independently extracting each positioned apple, and positioning the apple according to four surface defects of wormholes, scratches, cracks and decay;
step four, apple classification identification
1. According to the fruit-industry standard, the actual experimental capability, and the specific apple-grading clauses of the industry standard, dividing the apples into super grade, first grade and second grade;
2. setting apples conforming to the super, first or second grade as GOOD, and non-conforming apples as BAD;
3. labeling GOOD in green characters on a blue background, and BAD in red characters on a blue background;
4. after the apple surface-defect information is accurately obtained, grading the apples as GOOD or BAD.
2. The deep learning based apple hierarchical identification method according to claim 1, wherein the image preprocessing comprises the following specific steps:
(1) selecting appropriate color channel
Converting the collected apple image from an RGB mode to an HSL mode, carrying out HSL three-channel separation, and adopting an S channel component as an input signal source for subsequent image processing;
(2) image graying
Carrying out further graying processing on the apple image by adopting a weighted average method, separating the obtained RGB three-channel image, and continuously merging the images according to the three channels to obtain a grayscale image f (i, j);
(3) image denoising
Carrying out smooth denoising by adopting a weighted average filtering method, continuously carrying out smooth processing on the image by utilizing a sliding window, and finally obtaining the image after noise suppression by processing each pixel point and the neighborhood thereof;
(4) image segmentation
The method comprises the steps of automatically selecting an optimal global threshold value for image segmentation by using an automatic threshold value segmentation method of a gray level image, dividing an original image into a foreground image and a background image by using the threshold value, wherein when the optimal threshold value is selected, the difference between the background and the foreground is the largest, and the optimal segmentation threshold value of the current image is obtained by calculating the maximum inter-class variance between the foreground and the background for each gray level;
(5) contour extraction
Selecting a Canny edge detection operator to realize edge detection, finally selecting points with large amplitude variation to generate fragmented edges, then detecting all the generated fragmented edges by adopting a dual-threshold algorithm, and sequentially connecting the generated fragmented edges to extract the edges of the target object;
(6) area extraction
Assuming that the length of the target region in the image is M and the width is N, the pixel value (0 or 1) is represented by B (i, j), i and j respectively refer to the abscissa and the ordinate of the pixel, and the object area is calculated by the following formula:
3. the deep learning based apple hierarchical recognition method according to claim 2, wherein the gray level map f (i, j) is calculated by the following formula:
f(i,j)=0.3R(i,j)+0.59G(i,j)+0.11B(i,j);
wherein R (i, j) is a red channel, G (i, j) is a green channel, and B (i, j) is a blue channel.
5. The deep learning-based apple hierarchical identification method according to claim 2, wherein the specific method for labeling the positions of the apples is as follows:
(1) dividing and positioning the detected target by using the rectangular frame;
(2) uniformly placing the labeling information at the upper left corner of each target rectangular frame;
(3) a color scheme of green and white is used for the labeling information.
6. The deep learning based apple hierarchical identification method according to claim 5, wherein the method for segmenting and positioning the detected target by using the rectangular frame is as follows: the method comprises the steps of obtaining the detected boundary information of the apple target through a YOLO training model, and detecting the minimum circumscribed regular rectangle of the boundary information.
7. The deep learning based apple hierarchical recognition method according to claim 5, wherein the method of uniformly placing the labeling information at the upper left corner of each target rectangular box is as follows: after the target rectangle is positioned, the coordinate value of the upper left corner of the target rectangle frame, namely the minimum abscissa and the minimum ordinate of the target boundary, is obtained, and based on the coordinate value, the filling rectangle frame is drawn outwards to serve as the background of the labeling information.
8. The deep learning based apple hierarchical identification method according to claim 5, wherein the color scheme using green-white for the labeling information is as follows: and after the marking coordinates are determined and the background filling operation is finished, writing the detected apple name on a green background in a white character mode to finish the apple detection marking.
9. The deep learning based apple hierarchical identification method according to claim 2, wherein the automatic threshold segmentation method comprises the following specific steps:
step 1: counting the number of each pixel in the gray level;
step 2: calculating the probability distribution of each pixel in the whole image;
and step 3: traversing and searching the gray level, and calculating the inter-class probability of the foreground and the background under the current gray value;
and 4, step 4: calculating inter-class variance under different gray levels;
and 5: the gray level when the inter-class variance is maximum is selected as the global threshold of the image.
10. The deep learning-based apple classification identification method according to claim 1, wherein the four surface defects of wormholes, scratches, cracks and decays are located by the following specific methods:
1. insect eye
Carrying out gray level transformation on an image to obtain an image gray level image, then carrying out image segmentation on the gray level image through gray level change, carrying out contour extraction on a segmented binary image, and carrying out red filling and labeling on positions with wormholes through contour positions:
2. scratch mark
Firstly, denoising the whole image, performing binarization segmentation on the obtained image by utilizing the difference between the denoised image and the original image, and performing red filling and marking on the position of a scratch;
3. cracks and rot
And performing space color conversion on the image, converting the RGB space into the HSL space, performing image segmentation on the S space, and filling and marking the region with apple peel cracks and rot by using a thick red rectangular frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010538807.6A CN111915704A (en) | 2020-06-13 | 2020-06-13 | Apple hierarchical identification method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010538807.6A CN111915704A (en) | 2020-06-13 | 2020-06-13 | Apple hierarchical identification method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111915704A true CN111915704A (en) | 2020-11-10 |
Family
ID=73237518
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010538807.6A Pending CN111915704A (en) | 2020-06-13 | 2020-06-13 | Apple hierarchical identification method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111915704A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200805A (en) * | 2020-11-11 | 2021-01-08 | 北京平恒智能科技有限公司 | Industrial product image target extraction and defect judgment method |
CN112362673A (en) * | 2020-11-17 | 2021-02-12 | 清华大学天津高端装备研究院洛阳先进制造产业研发基地 | Visual detection method and system for dumplings |
CN112560896A (en) * | 2020-11-19 | 2021-03-26 | 安徽理工大学 | Fruit quality screening and classifying system based on image processing |
CN112561886A (en) * | 2020-12-18 | 2021-03-26 | 广东工业大学 | Automatic workpiece sorting method and system based on machine vision |
CN112580583A (en) * | 2020-12-28 | 2021-03-30 | 深圳市普汇智联科技有限公司 | Automatic calibration method and system for billiard design and color identification parameters |
CN113051992A (en) * | 2020-11-16 | 2021-06-29 | 泰州无印广告传媒有限公司 | Uniform speed identification system applying transparent card slot |
CN113158969A (en) * | 2021-05-10 | 2021-07-23 | 上海畅选科技合伙企业(有限合伙) | Apple appearance defect identification system and method |
CN113177925A (en) * | 2021-05-11 | 2021-07-27 | 昆明理工大学 | Method for nondestructive detection of fruit surface defects |
CN113319013A (en) * | 2021-07-08 | 2021-08-31 | 陕西科技大学 | Apple intelligent sorting method based on machine vision |
CN113569922A (en) * | 2021-07-08 | 2021-10-29 | 陕西科技大学 | Intelligent lossless apple sorting method |
CN113643287A (en) * | 2021-10-13 | 2021-11-12 | 深圳市巨力方视觉技术有限公司 | Fruit sorting method, device and computer readable storage medium |
CN114268621A (en) * | 2021-12-21 | 2022-04-01 | 东方数科(北京)信息技术有限公司 | Deep learning-based digital instrument meter reading method and device |
CN115816460A (en) * | 2022-12-21 | 2023-03-21 | 苏州科技大学 | Manipulator grabbing method based on deep learning target detection and image segmentation |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101713747A (en) * | 2009-11-23 | 2010-05-26 | 华东交通大学 | Thermal infrared imaging technology based method and device for detecting the early defect of fruit surface |
CN101984346A (en) * | 2010-10-19 | 2011-03-09 | 浙江大学 | Method of detecting fruit surface defect based on low pass filter |
CN103363893A (en) * | 2012-03-26 | 2013-10-23 | 新疆农业大学 | Method for detecting size of Fuji apple |
CN105354847A (en) * | 2015-11-10 | 2016-02-24 | 浙江大学 | Fruit surface defect detection method based on adaptive segmentation of sliding comparison window |
JP2016095160A (en) * | 2014-11-12 | 2016-05-26 | Jfeスチール株式会社 | Surface defect detection method and surface defect detection device |
CN105718945A (en) * | 2016-01-20 | 2016-06-29 | 江苏大学 | Apple picking robot night image identification method based on watershed and nerve network |
CN105891231A (en) * | 2015-01-26 | 2016-08-24 | 青岛农业大学 | Carrot surface defect detection method based on image processing |
CN107094933A (en) * | 2017-04-27 | 2017-08-29 | 湖北苗仙聚生物科技有限公司 | A kind of citron tea and its production and use |
CN109663747A (en) * | 2019-01-16 | 2019-04-23 | 郑州轻工业学院 | A kind of Chinese chestnut small holes caused by worms intelligent detecting method |
CN110148122A (en) * | 2019-05-17 | 2019-08-20 | 南京东奇智能制造研究院有限公司 | Apple presentation quality stage division based on deep learning |
CN111076670A (en) * | 2019-12-03 | 2020-04-28 | 北京京仪仪器仪表研究总院有限公司 | Online nondestructive testing method for internal and external quality of apples |
2020-06-13: CN202010538807.6A filed, published as CN111915704A (status: Pending)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101713747A (en) * | 2009-11-23 | 2010-05-26 | 华东交通大学 | Thermal infrared imaging technology based method and device for detecting the early defect of fruit surface |
CN101984346A (en) * | 2010-10-19 | 2011-03-09 | 浙江大学 | Method of detecting fruit surface defect based on low pass filter |
CN103363893A (en) * | 2012-03-26 | 2013-10-23 | 新疆农业大学 | Method for detecting size of Fuji apple |
JP2016095160A (en) * | 2014-11-12 | 2016-05-26 | Jfeスチール株式会社 | Surface defect detection method and surface defect detection device |
CN105891231A (en) * | 2015-01-26 | 2016-08-24 | 青岛农业大学 | Carrot surface defect detection method based on image processing |
CN105354847A (en) * | 2015-11-10 | 2016-02-24 | 浙江大学 | Fruit surface defect detection method based on adaptive segmentation of sliding comparison window |
CN105718945A (en) * | 2016-01-20 | 2016-06-29 | 江苏大学 | Apple picking robot night image identification method based on watershed and nerve network |
CN107094933A (en) * | 2017-04-27 | 2017-08-29 | 湖北苗仙聚生物科技有限公司 | A kind of citron tea and its production and use |
CN109663747A (en) * | 2019-01-16 | 2019-04-23 | 郑州轻工业学院 | A kind of Chinese chestnut small holes caused by worms intelligent detecting method |
CN110148122A (en) * | 2019-05-17 | 2019-08-20 | 南京东奇智能制造研究院有限公司 | Apple presentation quality stage division based on deep learning |
CN111076670A (en) * | 2019-12-03 | 2020-04-28 | 北京京仪仪器仪表研究总院有限公司 | Online nondestructive testing method for internal and external quality of apples |
Non-Patent Citations (5)
Title |
---|
YU Meng et al.: "Research on apple grading based on image recognition", Automation & Instrumentation *
HE Fang: "Research on vision-based papaya appearance quality detection", China Master's Theses Full-text Database, Agricultural Science and Technology *
DU Enming et al.: "Research on an automatic sorting and stacking system based on machine vision", Packaging Engineering *
WU Xing et al.: "Apple detection method based on a lightweight YOLO V3 convolutional neural network", HTTP://KNS.CNKI.NET/KCMS/DETAIL/11.1964.S.20200526.1513.006.HTML *
TONG Xu: "Research on machine-vision-based fruit surface grade classification and recognition", China Master's and Doctoral Theses Full-text Database (Master's), Engineering Science and Technology I *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200805A (en) * | 2020-11-11 | 2021-01-08 | 北京平恒智能科技有限公司 | Industrial product image target extraction and defect judgment method |
CN113051992B (en) * | 2020-11-16 | 2022-01-18 | 山东米捷软件有限公司 | Uniform speed identification system applying transparent card slot |
CN113051992A (en) * | 2020-11-16 | 2021-06-29 | 泰州无印广告传媒有限公司 | Uniform speed identification system applying transparent card slot |
CN112362673A (en) * | 2020-11-17 | 2021-02-12 | 清华大学天津高端装备研究院洛阳先进制造产业研发基地 | Visual detection method and system for dumplings |
CN112560896A (en) * | 2020-11-19 | 2021-03-26 | 安徽理工大学 | Fruit quality screening and classifying system based on image processing |
CN112561886A (en) * | 2020-12-18 | 2021-03-26 | 广东工业大学 | Automatic workpiece sorting method and system based on machine vision |
CN112580583A (en) * | 2020-12-28 | 2021-03-30 | 深圳市普汇智联科技有限公司 | Automatic calibration method and system for billiard design and color identification parameters |
CN112580583B (en) * | 2020-12-28 | 2024-03-15 | 深圳市普汇智联科技有限公司 | Automatic calibration method and system for billiard ball color recognition parameters |
CN113158969A (en) * | 2021-05-10 | 2021-07-23 | 上海畅选科技合伙企业(有限合伙) | Apple appearance defect identification system and method |
CN113177925A (en) * | 2021-05-11 | 2021-07-27 | 昆明理工大学 | Method for nondestructive detection of fruit surface defects |
CN113319013A (en) * | 2021-07-08 | 2021-08-31 | 陕西科技大学 | Apple intelligent sorting method based on machine vision |
CN113569922A (en) * | 2021-07-08 | 2021-10-29 | 陕西科技大学 | Intelligent lossless apple sorting method |
CN113643287A (en) * | 2021-10-13 | 2021-11-12 | 深圳市巨力方视觉技术有限公司 | Fruit sorting method, device and computer readable storage medium |
CN113643287B (en) * | 2021-10-13 | 2022-03-01 | 深圳市巨力方视觉技术有限公司 | Fruit sorting method, device and computer readable storage medium |
CN114268621A (en) * | 2021-12-21 | 2022-04-01 | 东方数科(北京)信息技术有限公司 | Deep learning-based digital instrument meter reading method and device |
CN114268621B (en) * | 2021-12-21 | 2024-04-19 | 东方数科(北京)信息技术有限公司 | Digital instrument meter reading method and device based on deep learning |
CN115816460A (en) * | 2022-12-21 | 2023-03-21 | 苏州科技大学 | Manipulator grabbing method based on deep learning target detection and image segmentation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111915704A (en) | Apple hierarchical identification method based on deep learning | |
CN110349126B (en) | Convolutional neural network-based marked steel plate surface defect detection method | |
CN113160192B (en) | Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background | |
CN107230202B (en) | Automatic identification method and system for road surface disease image | |
CN113781402B (en) | Method and device for detecting scratch defects on chip surface and computer equipment | |
CN104751142B (en) | A kind of natural scene Method for text detection based on stroke feature | |
CN109255344B (en) | Machine vision-based digital display type instrument positioning and reading identification method | |
CN102426649B (en) | Simple steel seal digital automatic identification method with high accuracy rate | |
CN111179243A (en) | Small-size chip crack detection method and system based on computer vision | |
CN109409355B (en) | Novel transformer nameplate identification method and device | |
CN107784669A (en) | A kind of method that hot spot extraction and its barycenter determine | |
CN107491730A (en) | A kind of laboratory test report recognition methods based on image procossing | |
CN111415363A (en) | Image edge identification method | |
CN111161222B (en) | Printing roller defect detection method based on visual saliency | |
CN112132196B (en) | Cigarette case defect identification method combining deep learning and image processing | |
CN112734761B (en) | Industrial product image boundary contour extraction method | |
CN113221881B (en) | Multi-level smart phone screen defect detection method | |
CN113298809B (en) | Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation | |
CN113034474A (en) | Test method for wafer map of OLED display | |
CN112991283A (en) | Flexible IC substrate line width detection method based on super-pixels, medium and equipment | |
CN114926407A (en) | Steel surface defect detection system based on deep learning | |
CN113609984A (en) | Pointer instrument reading identification method and device and electronic equipment | |
CN108154496B (en) | Electric equipment appearance change identification method suitable for electric power robot | |
CN113033558A (en) | Text detection method and device for natural scene and storage medium | |
CN113392819B (en) | Batch academic image automatic segmentation and labeling device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20201110 |