CN111325764A - Fruit image contour recognition method - Google Patents


Info

Publication number
CN111325764A
Authority
CN
China
Prior art keywords
fruit
contour
edge
image
target
Prior art date
Legal status
Granted
Application number
CN202010087130.9A
Other languages
Chinese (zh)
Other versions
CN111325764B (en)
Inventor
牟向伟
王梁
徐丹琦
Current Assignee
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date
Filing date
Publication date
Application filed by Guangxi Normal University
Priority to CN202010087130.9A
Publication of CN111325764A
Application granted
Publication of CN111325764B
Active legal status (current)
Anticipated expiration legal status


Classifications

    • G06T 7/13: Image analysis; Segmentation; Edge detection
    • G06F 18/253: Pattern recognition; Fusion techniques of extracted features
    • G06N 3/045: Neural networks; Architecture; Combinations of networks
    • G06N 3/08: Neural networks; Learning methods
    • G06T 2207/20081: Special algorithmic details; Training; Learning
    • G06T 2207/20084: Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/30188: Subject of image; Earth observation; Vegetation; Agriculture

Abstract

The invention provides a fruit image contour identification method, which comprises the following steps: training based on a Mask R-CNN deep convolutional neural network, inputting a fruit image training set into the Mask R-CNN deep convolutional neural network, and training to obtain a target detection model; extracting a region of interest from the fruit image verification set through the target detection model, and generating a target regression frame according to the region of interest; performing multi-feature fusion analysis on the fruit image in the target regression frame to determine the edge contour position of the fruit; and carrying out contour fitting optimization processing on the fruit edge contour position to obtain an optimized fruit edge contour. The method can effectively reduce the influence of complex background interference such as uneven illumination, partial occlusion and background features similar to the fruit on fruit identification and contour fitting, and improve robustness.

Description

Fruit image contour recognition method
Technical Field
The invention relates to the technical field of image processing, and in particular to a fruit image contour identification method.
Background
With the development of modern agriculture, reducing costs and reducing the use of skilled labor have become major challenges, and the advantages of using harvesting robots for high-intensity, intensive fruit-picking tasks are particularly significant. Although the development prospects of harvesting robots are broad, vision-based recognition and positioning performance is the bottleneck limiting their application. Owing to interference from the "uncontrollable environment" of the actual production setting (uneven illumination, partial occlusion, background features similar to the fruit, and so on), fruit recognition accuracy is low and time-consuming, and it is difficult to meet the requirements of effective picking operations.
Accurate recognition and accurate contour fitting of fruit images play a very important role in obstacle avoidance and fruit picking by the harvesting robot. If the robot cannot effectively recognize the fruit and accurately fit its contour, the manipulator may collide with obstacles, damaging the robot and the fruit trees, or may fail to grasp the fruit effectively, resulting in picking failure.
At present, research on accurate recognition and accurate contour fitting of fruit images mainly focuses on methods based on color analysis; recognizing fruit from color analysis alone leads to unsatisfactory recognition results and poor robustness.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a fruit image contour identification method that addresses the above defects of the prior art.
The technical scheme for solving the technical problems is as follows: a fruit image contour identification method comprises the following steps:
training based on a Mask R-CNN deep convolutional neural network: inputting a fruit image training set into the Mask R-CNN deep convolutional neural network, and training to obtain a target detection model;
extracting a region of interest from the fruit image verification set through the target detection model, and generating a target regression frame according to the region of interest;
performing multi-feature fusion analysis on the fruit image in the target regression frame to determine the edge contour position of the fruit;
and carrying out contour fitting optimization processing on the fruit edge contour position to obtain an optimized fruit edge contour.
Another technical solution of the present invention for solving the above technical problems is as follows: a fruit image contour recognition device, comprising:
a training module, used for training based on a Mask R-CNN deep convolutional neural network, inputting a fruit image training set into the Mask R-CNN deep convolutional neural network, and training to obtain a target detection model;
a processing module, used for extracting a region of interest from the fruit image verification set through the target detection model, generating a target regression frame according to the region of interest, and performing multi-feature fusion analysis on the fruit image in the target regression frame to determine the edge contour position of the fruit;
and an optimization module, used for carrying out contour fitting optimization processing on the fruit edge contour position to obtain an optimized fruit edge contour.
Another technical solution of the present invention for solving the above technical problems is as follows: a fruit image contour recognition apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the fruit image contour recognition method as described above when executing the computer program.
Another technical solution of the present invention for solving the above technical problems is as follows: a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the fruit image contour recognition method as described above.
The invention has the beneficial effects that: a Mask R-CNN deep convolutional neural network is trained to obtain a target detection model, a region of interest is obtained through the target detection model, multi-feature fusion analysis is carried out on the fruit image according to the target regression frame generated from the region of interest, the edge contour position of the fruit is determined, and contour fitting optimization processing is carried out on the edge contour position of the fruit, so that the influence of complex background interference such as uneven illumination, partial occlusion and background features similar to the fruit on fruit identification and contour fitting can be effectively reduced, and robustness is improved.
Drawings
Fig. 1 is a schematic flow chart of a fruit image contour recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for training a neural network according to an embodiment of the present invention;
FIG. 3 is a schematic view of a process for finding a fruit contour according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of optimizing a fruit contour according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a schematic flow chart of a fruit image contour identification method according to an embodiment of the present invention.
As shown in fig. 1, a fruit image contour identification method comprises the following steps:
training based on a Mask R-CNN deep convolutional neural network: inputting a fruit image training set into the Mask R-CNN deep convolutional neural network, and training to obtain a target detection model;
extracting a region of interest from the fruit image verification set through the target detection model, and generating a target regression frame according to the region of interest;
performing multi-feature fusion analysis on the fruit image in the target regression frame to determine the edge contour position of the fruit;
and carrying out contour fitting optimization processing on the fruit edge contour position to obtain an optimized fruit edge contour.
In this embodiment, a Mask R-CNN deep convolutional neural network is trained to obtain a target detection model, a region of interest is obtained through the target detection model, multi-feature fusion analysis is carried out on the fruit image according to the target regression frame generated from the region of interest, the edge contour position of the fruit is determined, and contour fitting optimization processing is carried out on the edge contour position of the fruit, so that the influence of complex background interference such as uneven illumination, partial occlusion and background features similar to the fruit on fruit identification and contour fitting can be effectively reduced, and robustness is improved.
The method also comprises a step of preprocessing the images in the fruit image training set and the fruit image verification set, mainly to address the severe background color variation of the environment in the original images and the uneven segmentation caused by illumination changes, alternately overlapping leaves, and the like. This reduces the influence of uneven illumination on the identification of mature fruit in the image and ensures effective color-difference threshold segmentation in subsequent steps.
The process of training the Mask R-CNN deep convolutional neural network is described below. As shown in fig. 2, training comprises three stages, namely a pre-training stage, a transfer learning stage and a model testing stage. First, in the pre-training stage, a ResNet neural network is used to pre-train on the pre-training set samples to obtain a mature-fruit feature extractor. Then, in the transfer learning stage, a Mask branch and a classifier branch are added, network model parameters are trained on the optimized training set samples, and an optimized model is obtained through multiple rounds of iterative training and transfer-learning adjustment. Finally, in the model testing stage, the model is verified with the verification set samples and the network parameters are further adjusted, so that the target detection model is generated.
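The patent does not tie this training flow to a particular framework. The sketch below shows one possible realization of the pre-training and transfer-learning stages with torchvision's Mask R-CNN (a PyTorch implementation), where a pretrained ResNet-FPN backbone plays the role of the mature-fruit feature extractor and the classifier and Mask branches are replaced for the fruit classes; the number of classes, the optimizer settings, the hidden-layer size and the data loader yielding images with box, label and mask annotations are illustrative assumptions rather than values taken from the patent.

    # A minimal transfer-learning sketch (assumed setup; requires torchvision >= 0.13).
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    def build_fruit_maskrcnn(num_classes=2):          # background + mature fruit (assumed)
        # Pre-training stage: start from a Mask R-CNN with a pretrained ResNet-FPN backbone.
        model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
        # Transfer-learning stage: replace the classifier branch and the Mask branch.
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
        model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
        return model

    def train_one_epoch(model, data_loader, lr=0.005):
        # data_loader is assumed to yield (images, targets) with boxes, labels and masks.
        optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                                    lr=lr, momentum=0.9, weight_decay=0.0005)
        model.train()
        for images, targets in data_loader:
            loss_dict = model(images, targets)        # RPN, classification, box and mask losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()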
The method also comprises the following process of optimizing the target detection model:
and S1, loading parameters of a pre-trained fruit target detection model.
And S2, modifying the configuration parameters and the classification parameters; in order to obtain faster and more accurate training results, the ranges of the relevant parameters are set according to certain principles and the optimal parameter settings are searched for.
And S3, training the basic network layers; in the pre-training stage, convolutional network layers of different depths are set for extracting fruit features, and the applicable basic network layers for subsequent feature extraction are selected by judging and comparing the convergence of the loss function.
And S4, optimizing and training the network model; after each optimization, parameters such as the iteration step size and number of iterations, the learning rate and the positive IoU (confidence) threshold of the adjusted network model are recorded, and the convergence speed and convergence degree of the model are observed and recorded. Parameter adjustment and model optimization are performed on the camellia oleifera fruit target detection model with the optimized training set samples, and the labeled fruit target detection images are obtained. The accuracy, missed-detection rate and false-detection rate of fruit target detection on the optimized training set are evaluated.
And S5, repeating step S4 until the model achieves the desired result, and recording the optimal value of each model parameter.
The fruit image training set comprises 600 fruit image training pictures and 1200 fruit optimization image training pictures, and the fruit image verification set comprises 1200 fruit image verification pictures.
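As a concrete illustration of steps S2 to S5, the sketch below loops over a small grid of candidate settings, retrains the detector with each one, and scores the run by the accuracy, missed-detection rate and false-detection rate named above. The candidate values, the train_and_validate() helper and the exact metric definitions (computed here from true-positive, false-positive and false-negative counts) are assumptions for illustration, not prescriptions from the patent.

    # A minimal sketch of the S2-S5 parameter search (all candidate values are assumptions).
    from itertools import product

    learning_rates = [0.01, 0.005, 0.001]
    iteration_counts = [20000, 40000]
    positive_iou_thresholds = [0.5, 0.7]

    best = None
    for lr, iters, pos_iou in product(learning_rates, iteration_counts, positive_iou_thresholds):
        # train_and_validate() is a hypothetical helper that retrains the detector with the
        # given settings and returns true-positive, false-positive and false-negative counts.
        tp, fp, fn = train_and_validate(lr=lr, iterations=iters, positive_iou=pos_iou)
        accuracy = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0   # detection accuracy
        miss_rate = fn / (tp + fn) if (tp + fn) else 0.0            # missed-detection rate
        false_rate = fp / (tp + fp) if (tp + fp) else 0.0           # false-detection rate
        if best is None or accuracy > best["accuracy"]:
            best = {"accuracy": accuracy, "miss_rate": miss_rate, "false_rate": false_rate,
                    "lr": lr, "iterations": iters, "positive_iou": pos_iou}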
Optionally, as an embodiment of the present invention, the target detection model comprises a backbone network, a region proposal network and a three-branch structure;
the process of extracting a region of interest from the fruit image verification set through the target detection model and generating a target regression frame according to the region of interest comprises the following steps:
performing feature extraction on the fruit image verification set with the backbone network to obtain feature information, and performing residual propagation processing on the feature information to generate a feature map;
performing foreground/background processing on the feature map with the region proposal network to obtain a region of interest, and performing regression processing on the region of interest to generate a target regression frame;
and detecting the target regression frame with the three-branch structure to obtain the category, the coordinates and the mask of the target regression frame.
In this embodiment, Mask R-CNN is used as the target detection network structure for mature camellia oleifera fruit. An additional three-branch structure is added on the basis of Faster R-CNN to extend the target detection framework, and a region proposal network is added to obtain regions of interest, so that an improved deep-learning neural network for instance segmentation is obtained.
Optionally, as an embodiment of the present invention, the process of performing foreground/background processing on the feature map with the region proposal network comprises:
building a convolutional layer, and performing convolution processing on the feature map to obtain a plurality of anchor points;
and generating convolution kernels corresponding to the number of anchor points, judging the foreground and background of the feature map through each convolution kernel, and obtaining the region of interest from the foreground.
Optionally, as an embodiment of the present invention, detecting the target regression frame with the three-branch structure comprises:
extracting the features of the target regression frame with the RoIAlign region feature extraction method, and converting the extracted features into values of a fixed dimension;
setting a fully connected layer after the convolutional layer, inputting each fixed-dimension value to the fully connected layer to share the weights of the region of interest, and completing the adjustment of the region of interest;
establishing a Cls & Reg path and a Mask path after the fully connected layer, wherein the Cls & Reg path comprises a Cls branch and a Reg branch; the adjusted region of interest is fed into the Cls & Reg path, the target regression frame and its coordinates are generated through the Reg branch, and the category of the target regression frame and its class probability are predicted through the Cls branch;
and feeding the target regression frame into the Mask path, and obtaining the mask of the target regression frame through the Mask path.
The region proposal network convolves feature maps of different scales, generating 3 anchor points (anchors) at each location, with 3 convolution kernels generated for classification (the fruit color classes and the background). Two fully connected layers are connected after the convolutional layer to discriminate the foreground (target) from the background for each pixel and to perform regression correction of the fruit target frame.
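For illustration, the sketch below runs a trained detector (such as the one fine-tuned in the earlier sketch) on a single validation image and reads out what the three-branch structure produces: the category, the coordinates of the target regression frame, and the mask. The 0.7 score threshold and the dictionary keys follow torchvision's Mask R-CNN output convention and are assumptions with respect to the patent itself.

    # A minimal inference sketch reading the three-branch outputs (assumed torchvision setup).
    import torch

    def detect_fruit(model, image, score_threshold=0.7):
        # image: float tensor of shape (3, H, W) scaled to [0, 1] (an assumption).
        model.eval()
        with torch.no_grad():
            pred = model([image])[0]           # RPN proposals -> RoIAlign -> Cls/Reg and Mask branches
        keep = pred["scores"] > score_threshold
        boxes = pred["boxes"][keep]            # target regression frames (x1, y1, x2, y2)
        labels = pred["labels"][keep]          # predicted fruit categories
        masks = pred["masks"][keep] > 0.5      # per-instance binary masks
        return boxes, labels, masks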
Optionally, as an embodiment of the present invention, the process of performing multi-feature fusion analysis on the fruit image in the target regression frame comprises:
performing convolution smoothing on the fruit image verification set with the PyMeanshift mean-shift algorithm;
graying the smoothed fruit image verification set with the 2R-G-B color-difference segmentation algorithm;
performing overall fruit-edge contour detection on the grayed fruit image verification set, within the target regression frame of the fruit target detection model, with the Sobel operator, and binarizing the detected overall fruit-edge contour with an adaptive threshold segmentation algorithm;
normalizing the binarized overall fruit-edge contour with a distance transformation method to obtain the edge local maxima;
segmenting adhered (touching) objects in the overall fruit-edge contour with the watershed transform algorithm and the edge local maxima to obtain a plurality of fruit edge contours;
and optimizing the plurality of fruit edge contours with a filtering algorithm to determine the positions of the fruit edge contours.
In this embodiment, the difficulties in fruit image identification and contour fitting caused by problems such as complicated leaf interference, interference from immature fruit, near-circular background shapes, overlapping fruit and uneven color-difference segmentation due to uneven illumination can be overcome.
Optionally, as an embodiment of the present invention, the graying the smoothed fruit image verification set according to a 2R-G-B color difference segmentation algorithm includes:
carrying out graying processing on the smoothed fruit image verification set according to a first formula, wherein the first formula is as follows:
f(i, j) = 2R(i, j) - G(i, j) - B(i, j), if R(i, j) > G(i, j) and R(i, j) > B(i, j); f(i, j) = 0, otherwise
wherein f (i, j) is the gray value of the color pixel at the coordinate (i, j), and R (i, j), G (i, j) and B (i, j) are the three-component pixel values of the color pixel at the coordinate (i, j), respectively.
That is, if and only if the R component is greater than both the G component and the B component, the gray value of the color pixel is calculated according to the improved 2R-G-B index method; otherwise, the gray value of the color pixel is assigned zero.
In this embodiment, compared with the classic 2R-G-B algorithm, which uses an exponential calculation and therefore has low processing efficiency and long processing time, the improved 2R-G-B algorithm reduces computational complexity, improves algorithm efficiency and reduces the average processing time, while also more effectively suppressing background noise and separating the fruit from the background.
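A minimal sketch of the improved 2R-G-B graying rule described above, assuming the input is a BGR uint8 array as returned by cv2.imread; the function name and the array handling are illustrative, not the patent's implementation.

    import numpy as np

    def gray_2r_g_b(image_bgr):
        """Improved 2R-G-B color-difference graying (sketch)."""
        img = image_bgr.astype(np.int32)
        b, g, r = img[..., 0], img[..., 1], img[..., 2]
        gray = 2 * r - g - b                    # 2R - G - B color-difference value
        mask = (r > g) & (r > b)                # only where the R component dominates
        gray = np.where(mask, gray, 0)          # otherwise assign a gray value of zero
        return np.clip(gray, 0, 255).astype(np.uint8)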
Optionally, as an embodiment of the present invention, a normalized fusion algorithm based on distance transformation and morphological operations is adopted in the process of determining the candidate contours:
First, the acquired fruit image is preprocessed: PyMeanshift convolution smoothing is used to flatten the background, and noise points are removed from the smoothed image; the image is then grayed with the improved 2R-G-B color-difference segmentation algorithm. Next, a distance transformation based on the chamfer distance is applied to the gray image, forming a highlighted marker peak for each fruit target. A morphological step is added at the same time: with a custom structuring element, dilation/erosion and opening/closing operations are used to filter out the small noise regions that still remain. The result of the distance transformation is then normalized so that the local maxima can be found. Finally, seeds are obtained by region growing to generate the "Mark" markers. The method can effectively extract the fruit feature points contained in the image while removing redundant edges and extracting the effective edges, so that the processing speed of the algorithm is improved, the processing time and the complexity of the algorithm are reduced, and the practicability is good.
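The sketch below shows one OpenCV realization of this marker-generation step, assuming `binary` is the binarized fruit-edge image produced by the preceding steps; the 5 × 5 elliptical structuring element and the 0.5 peak threshold are assumptions (the patent only speaks of a custom structuring element and of normalizing the distance transform).

    import cv2
    import numpy as np

    def make_markers(binary):
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))     # custom structuring element
        clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)          # opening/closing plus
        clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)          # dilation/erosion filtering

        dist = cv2.distanceTransform(clean, cv2.DIST_L2, 5)               # distance map per fruit target
        dist = cv2.normalize(dist, None, 0.0, 1.0, cv2.NORM_MINMAX)       # normalize to expose local maxima

        _, peaks = cv2.threshold(dist, 0.5, 1.0, cv2.THRESH_BINARY)       # highlighted marker peaks
        _, markers = cv2.connectedComponents(peaks.astype(np.uint8))      # labelled "Mark" seeds
        # A complete implementation would also mark a sure-background seed before watershed.
        return markers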
Optionally, as an embodiment of the present invention, the process of performing overall outline detection on the fruit edge of the grayed fruit image verification set according to the Sobel operator includes:
the Sobel operator is:
Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A,  Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A
G = sqrt(Gx^2 + Gy^2)
where A is the grayed image, Gx is the horizontal gradient and Gy is the vertical gradient.
In the above embodiment, the edge contour of the image is found on the grayscale image: the Sobel operator is used for convolution, the adaptive threshold is used to solve for the binarized edges, and steps such as Gaussian smoothing, gradient computation, adaptive thresholding and convolution filtering are used to find the fruit contour edges in the image.
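A minimal OpenCV sketch of this Sobel-plus-adaptive-threshold step, assuming `gray` is the 2R-G-B grayed image restricted to the target regression frame; the kernel sizes and the adaptive-threshold block size and constant are assumptions.

    import cv2

    def sobel_edges(gray):
        blur = cv2.GaussianBlur(gray, (5, 5), 0)                 # Gaussian smoothing
        gx = cv2.Sobel(blur, cv2.CV_16S, 1, 0, ksize=3)          # horizontal gradient Gx
        gy = cv2.Sobel(blur, cv2.CV_16S, 0, 1, ksize=3)          # vertical gradient Gy
        grad = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                               cv2.convertScaleAbs(gy), 0.5, 0)  # approximate gradient magnitude
        # Adaptive threshold segmentation to binarize the detected edge contour.
        return cv2.adaptiveThreshold(grad, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, 11, 2)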
Optionally, as an embodiment of the present invention, the process of segmenting adhered (touching) objects in the overall fruit-edge contour according to the watershed transform algorithm and the edge local maxima comprises:
g(i, j) = ∇f(i, j) = (∂f/∂i)·e_i + (∂f/∂j)·e_j
where f(i, j) is the gray value of the color pixel at coordinate (i, j), ∂f/∂i and ∂f/∂j are the partial derivatives of f(i, j) in the horizontal and vertical directions, e_i and e_j are the unit vectors on the two coordinate axes, (∂f/∂i)·e_i and (∂f/∂j)·e_j are the gradient components along the two coordinate axes at each color pixel, and g(i, j) is the gradient vector at each color pixel.
As shown in fig. 3, the grayed fruit image verification set is input, the gradient image is calculated in the grayscale vector space, the gradient image is Gaussian-filtered, and morphological opening/closing and dilation/erosion operations are applied; that is, the gradient image is Gaussian-filtered and then morphologically opened/closed and dilated/eroded with a 5 × 5 elliptical structuring element template, and the result is finally processed by the watershed transform algorithm.
In this embodiment, compared with the classical watershed transform algorithm, which is particularly sensitive to noise, easily causes degradation of the image gradient and offset of the segmentation contour, and is prone to over-segmentation, the improved watershed transform algorithm appropriately selects and improves the gradient-image calculation method and the size of the filtering template. This alleviates the over-segmentation that the watershed algorithm easily suffers from noise and the like; at the same time, by filtering the gradient image in combination with morphological operations, complex background interference such as leaves whose shape is close to a circle and uneven leaf occlusion can be effectively eliminated, and the contours of overlapping fruit images can be completely separated.
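The sketch below shows one way to realize this improved watershed step in OpenCV, reusing the markers produced by the distance-transform sketch above; the morphological-gradient image and the 5 × 5 elliptical element follow the description, while the remaining details are assumptions.

    import cv2
    import numpy as np

    def watershed_contours(gray, markers):
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)   # gradient image
        grad = cv2.GaussianBlur(grad, (5, 5), 0)                    # Gaussian filtering of the gradient
        grad = cv2.morphologyEx(grad, cv2.MORPH_CLOSE, kernel)      # opening/closing, dilation/erosion

        markers = markers.astype(np.int32)                          # watershed needs int32 seed labels
        color = cv2.cvtColor(grad, cv2.COLOR_GRAY2BGR)              # watershed needs a 3-channel image
        cv2.watershed(color, markers)                               # separate adhered (touching) fruits
        return np.uint8(markers == -1) * 255                        # separated fruit edge contours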
Optionally, as an embodiment of the present invention, the process of optimizing the plurality of fruit edge contours according to a filtering algorithm includes:
optimizing the plurality of fruit edge contours according to a second formula, wherein the second formula is as follows:
h1(i, j) = h(i, j), if S > D; h1(i, j) = 0, otherwise
h2(i, j) = h1(i, j), if M > T; h2(i, j) = 0, otherwise
where h(i, j) is the pixel parameter of the initial contour object, h1(i, j) is the contour-object pixel parameter after area filtering, h2(i, j) is the contour-object pixel parameter after aspect-ratio filtering, S and M are the contour area and the width-to-height ratio of the initial contour, and D and T are the given thresholds for the contour area and the aspect ratio.
As shown in fig. 4, the specific implementation process is as follows: obtain the pixel parameters of a contour object; calculate the contour area Si with the area function contourArea; when the contour area Si is smaller than or equal to the threshold D, obtain the pixel parameters of the next contour object; when the contour area Si is larger than the threshold D, obtain the width and height of the region with the function boundingRect; calculate the aspect ratio Hi of the contour object; when the aspect ratio Hi is larger than the threshold T, keep the contour object, otherwise obtain the pixel parameters of the next contour object.
The pixel parameters of the contour object are obtained, the contour area is calculated with the area function, and the threshold D is set to 100 pixels; contours whose area does not exceed the threshold D are filtered out. At the same time, the region information of the contour object is obtained, the width-to-height ratio of the contour object is calculated, and the threshold range T is set to 0.9 to 1.1; only contours whose width-to-height ratio lies within the threshold range T are kept.
In the embodiment, false positive contours can be effectively removed, and refined identification processing of fruit contours is realized.
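A minimal sketch of this area and aspect-ratio filter, following the thresholds given above (an area threshold D of 100 pixels and an aspect-ratio range of 0.9 to 1.1); the function name and the retrieval mode passed to cv2.findContours are assumptions, and OpenCV 4 is assumed for the findContours return signature.

    import cv2

    def filter_contours(binary, area_threshold=100.0, ratio_range=(0.9, 1.1)):
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        kept = []
        for contour in contours:
            area = cv2.contourArea(contour)                   # contour area Si
            if area <= area_threshold:                        # area filtering
                continue
            _, _, w, h = cv2.boundingRect(contour)            # bounding-region width and height
            ratio = w / float(h)                              # aspect ratio Hi
            if ratio_range[0] <= ratio <= ratio_range[1]:     # aspect-ratio filtering
                kept.append(contour)
        return kept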
Optionally, as an embodiment of the present invention, the process of performing contour fitting optimization processing on the fruit edge contour position comprises: performing contour fitting optimization processing on the fruit edge contour according to a topological structure reduction algorithm, which is as follows:
S1: solving the geometric moments of the image according to a third formula to obtain the centroid of each fruit edge contour object, wherein the third formula is as follows:
M00 = Σ_i Σ_j V(i, j),  M10 = Σ_i Σ_j i · V(i, j),  M01 = Σ_i Σ_j j · V(i, j)
x_c = M10 / M00,  y_c = M01 / M00
where M00 is the zero-order moment of the image, M01 and M10 are the first-order moments of the image, (x_c, y_c) is the centroid of the contour object, and V(i, j) is the pixel of the contour object at position (i, j);
S2: acquiring the minimum circumscribed polygon of each fruit edge contour object according to a fourth formula, wherein the fourth formula is as follows:
v_i(x, y) = { V(x_A, y_A), ..., V(x_i, y_i) },  if d_max > K
where v_i(x, y) is the set of retained pixel points, d_max = V(x_A, y_A) - V(x_B, y_B), V(x_A, y_A) and V(x_B, y_B) are the pixel positions of the head and tail points of the fruit edge contour curve, V(x_A, y_A) ... V(x_i, y_i) are all the points from the starting position A to position i on the fruit edge contour curve, taken as a new set of pixel points with A and i as the new head and tail pixel positions, d_max is the distance between the head and tail line segments, and K is a distance threshold parameter.
Specifically, the topological structure reduction algorithm obtains the centroid of each contour object as the contour-fitting circle center by solving the geometric moments of the image; then the minimum circumscribed polygon of each contour object is obtained with an algorithm based on RDP (Ramer-Douglas-Peucker); finally, the fruit contour in the original image is fitted and restored according to the contour-fitting circle center and the minimum circumscribed polygon region.
According to this embodiment, accurate fitting and restoration of the fruit contour can be effectively achieved, providing information for the subsequent grasping state of the manipulator and greatly improving the picking success rate.
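A minimal OpenCV sketch of this fitting-reduction step: cv2.moments gives the centroid used as the contour-fitting circle center, and cv2.approxPolyDP performs the RDP reduction of each contour; the input is assumed to be the filtered contour list from the earlier sketch, and the epsilon chosen for the distance threshold K (a fraction of the arc length) is an assumption.

    import cv2

    def reduce_contours(contours, epsilon_fraction=0.01):
        fitted = []
        for contour in contours:
            m = cv2.moments(contour)                          # geometric moments M00, M10, M01, ...
            if m["m00"] == 0:
                continue
            xc = m["m10"] / m["m00"]                          # centroid = contour-fitting circle center
            yc = m["m01"] / m["m00"]
            k = epsilon_fraction * cv2.arcLength(contour, True)   # distance threshold K (assumed)
            polygon = cv2.approxPolyDP(contour, k, True)      # RDP polygon reduction of the contour
            fitted.append(((xc, yc), polygon))
        return fitted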
Optionally, as an embodiment of the present invention, a fruit image contour recognition apparatus includes:
a training module, used for training based on a Mask R-CNN deep convolutional neural network, inputting a fruit image training set into the Mask R-CNN deep convolutional neural network, and training to obtain a target detection model;
a processing module, used for extracting a region of interest from the fruit image verification set through the target detection model, generating a target regression frame according to the region of interest, and performing multi-feature fusion analysis on the fruit image in the target regression frame to determine the edge contour position of the fruit;
and an optimization module, used for carrying out contour fitting optimization processing on the fruit edge contour position to obtain an optimized fruit edge contour.
Optionally, as an embodiment of the present invention, a fruit image contour recognition apparatus includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the fruit image contour recognition method as described above when executing the computer program.
Another technical solution of the present invention for solving the above technical problems is as follows: a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the fruit image contour recognition method as described above.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A fruit image contour identification method, characterized by comprising the following steps:
training based on a Mask R-CNN deep convolutional neural network: inputting a fruit image training set into the Mask R-CNN deep convolutional neural network, and training to obtain a target detection model;
extracting a region of interest from the fruit image verification set through the target detection model, and generating a target regression frame according to the region of interest;
performing multi-feature fusion analysis on the fruit image in the target regression frame to determine the edge contour position of the fruit;
and carrying out contour fitting optimization processing on the fruit edge contour position to obtain an optimized fruit edge contour.
2. The identification method according to claim 1, wherein the target detection model comprises a backbone network, a region proposal network and a three-branch structure;
the process of extracting a region of interest from the fruit image verification set through the target detection model and generating a target regression frame according to the region of interest comprises the following steps:
performing feature extraction on the fruit image verification set with the backbone network to obtain feature information, and performing residual propagation processing on the feature information to generate a feature map;
performing foreground/background processing on the feature map with the region proposal network to obtain a region of interest, and performing regression processing on the region of interest to generate a target regression frame;
and detecting the target regression frame with the three-branch structure to obtain the category, the coordinates and the mask of the target regression frame.
3. The identification method according to claim 2, wherein the process of performing foreground/background processing on the feature map according to the region proposal network comprises:
building a convolutional layer, and performing convolution processing on the feature map to obtain a plurality of anchor points;
and generating convolution kernels corresponding to the number of anchor points, judging the foreground and background of the feature map through each convolution kernel, and obtaining the region of interest from the foreground.
4. The identification method according to claim 2, wherein detecting the target regression frame according to the three-branch structure comprises:
extracting the features of the target regression frame with the RoIAlign region feature extraction method, and converting the extracted features into values of a fixed dimension;
setting a fully connected layer after the convolutional layer, inputting each fixed-dimension value to the fully connected layer to share the weights of the region of interest, and completing the adjustment of the region of interest;
establishing a Cls & Reg path and a Mask path after the fully connected layer, wherein the Cls & Reg path comprises a Cls branch and a Reg branch; the adjusted region of interest is fed into the Cls & Reg path, the target regression frame and its coordinates are generated through the Reg branch, and the category of the target regression frame is predicted through the Cls branch;
and feeding the target regression frame into the Mask path, and obtaining the mask of the target regression frame through the Mask path.
5. The identification method according to claim 2, wherein the process of performing multi-feature fusion analysis on the fruit image in the target regression frame comprises:
performing convolution smoothing on the fruit image verification set with the PyMeanshift mean-shift algorithm;
graying the smoothed fruit image verification set with the 2R-G-B color-difference segmentation algorithm;
performing overall fruit-edge contour detection on the grayed fruit image verification set, within the target regression frame of the fruit target detection model, with the Sobel operator, and binarizing the detected overall fruit-edge contour with an adaptive threshold segmentation algorithm;
normalizing the binarized overall fruit-edge contour with a distance transformation method to obtain the edge local maxima;
segmenting adhered (touching) objects in the overall fruit-edge contour with the watershed transform algorithm and the edge local maxima to obtain a plurality of fruit edge contours;
and optimizing the plurality of fruit edge contours with a filtering algorithm to determine the positions of the fruit edge contours.
6. The identification method according to claim 5, wherein the graying the smoothed fruit image verification set according to the 2R-G-B color difference segmentation algorithm comprises:
carrying out graying processing on the smoothed fruit image verification set according to a first formula, wherein the first formula is as follows:
f(i, j) = 2R(i, j) - G(i, j) - B(i, j), if R(i, j) > G(i, j) and R(i, j) > B(i, j); f(i, j) = 0, otherwise
wherein f (i, j) is the gray value of the color pixel at the coordinate (i, j), and R (i, j), G (i, j) and B (i, j) are the three-component pixel values of the color pixel at the coordinate (i, j), respectively.
7. The identification method according to claim 5, wherein the step of performing fruit edge global contour detection on the grayed fruit image verification set according to Sobel operator comprises:
the Sobel operator is:
Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A,  Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A
G = sqrt(Gx^2 + Gy^2)
where A is the grayed image, Gx is the horizontal gradient and Gy is the vertical gradient.
8. The identification method according to claim 5, wherein the process of segmenting adhered (touching) objects in the overall fruit-edge contour according to the watershed transform algorithm and the edge local maxima comprises:
g(i, j) = ∇f(i, j) = (∂f/∂i)·e_i + (∂f/∂j)·e_j
where f(i, j) is the gray value of the color pixel at coordinate (i, j), ∂f/∂i and ∂f/∂j are the partial derivatives of f(i, j) in the horizontal and vertical directions, e_i and e_j are the unit vectors on the two coordinate axes, (∂f/∂i)·e_i and (∂f/∂j)·e_j are the gradient components along the two coordinate axes at each color pixel, and g(i, j) is the gradient vector at each color pixel.
9. The identification method according to claim 5, wherein said process of optimizing said plurality of fruit edge contours according to a filtering algorithm comprises:
optimizing the plurality of fruit edge contours according to a second formula, wherein the second formula is as follows:
h1(i, j) = h(i, j), if S > D; h1(i, j) = 0, otherwise
h2(i, j) = h1(i, j), if M > T; h2(i, j) = 0, otherwise
where h(i, j) is the pixel parameter of the initial contour object, h1(i, j) is the contour-object pixel parameter after area filtering, h2(i, j) is the contour-object pixel parameter after aspect-ratio filtering, S and M are the contour area and the width-to-height ratio of the initial contour, and D and T are the given thresholds for the contour area and the aspect ratio.
10. The identification method according to claim 5, wherein the process of performing contour fitting optimization processing on the fruit edge contour position comprises: carrying out contour fitting optimization processing on the fruit edge contour according to a topological structure reduction algorithm, wherein the topological structure reduction algorithm is as follows:
S1: solving the geometric moments of the image according to a third formula to obtain the centroid of each fruit edge contour object, wherein the third formula is as follows:
M00 = Σ_i Σ_j V(i, j),  M10 = Σ_i Σ_j i · V(i, j),  M01 = Σ_i Σ_j j · V(i, j)
x_c = M10 / M00,  y_c = M01 / M00
where M00 is the zero-order moment of the image, M01 and M10 are the first-order moments of the image, (x_c, y_c) is the centroid of the contour object, and V(i, j) is the pixel of the contour object at position (i, j);
S2: acquiring the minimum circumscribed polygon of each fruit edge contour object according to a fourth formula, wherein the fourth formula is as follows:
v_i(x, y) = { V(x_A, y_A), ..., V(x_i, y_i) },  if d_max > K
where v_i(x, y) is the set of retained pixel points, d_max = V(x_A, y_A) - V(x_B, y_B), V(x_A, y_A) and V(x_B, y_B) are the pixel positions of the head and tail points of the fruit edge contour curve, V(x_A, y_A) ... V(x_i, y_i) are all the points from the starting position A to position i on the fruit edge contour curve, taken as a new set of pixel points with A and i as the new head and tail pixel positions, d_max is the distance between the head and tail line segments, and K is a distance threshold parameter.
CN202010087130.9A 2020-02-11 2020-02-11 Fruit image contour recognition method Active CN111325764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010087130.9A CN111325764B (en) 2020-02-11 2020-02-11 Fruit image contour recognition method


Publications (2)

Publication Number Publication Date
CN111325764A true CN111325764A (en) 2020-06-23
CN111325764B CN111325764B (en) 2022-05-31

Family

ID=71172618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010087130.9A Active CN111325764B (en) 2020-02-11 2020-02-11 Fruit image contour recognition method

Country Status (1)

Country Link
CN (1) CN111325764B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018002841A1 (en) * 2016-06-29 2018-01-04 Ser.Mac S.R.L. An apparatus for detecting damaged fruit and vegetable products
US20180012072A1 (en) * 2016-07-09 2018-01-11 Grabango Co. Computer vision for ambient data acquisition
CN109345527A (en) * 2018-09-28 2019-02-15 广西师范大学 A kind of tumor of bladder detection method based on MaskRcnn
CN110152938A (en) * 2019-04-02 2019-08-23 华中科技大学 A kind of component dispensing track extraction method and automatically control machine people system
CN110348445A (en) * 2019-06-06 2019-10-18 华中科技大学 A kind of example dividing method merging empty convolution sum marginal information
CN110619750A (en) * 2019-08-15 2019-12-27 重庆特斯联智慧科技股份有限公司 Intelligent aerial photography identification method and system for illegal parking vehicle
CN110619632A (en) * 2019-09-18 2019-12-27 华南农业大学 Mango example confrontation segmentation method based on Mask R-CNN

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DANQI XU et al.: "3D Reconstruction of Camellia oleifera Fruit Recognition and Fruit Branch Based on Kinect Camera", ICAIIS 2021: 2nd International Conference on Artificial Intelligence and Information Systems *
KAIMING HE et al.: "Mask R-CNN", 2017 IEEE International Conference on Computer Vision *
杨长辉 et al.: "Recognition and reconstruction of citrus branches and trunks in complex backgrounds based on Mask R-CNN" (基于Mask R-CNN的复杂背景下柑橘树枝干识别与重建), Transactions of the Chinese Society for Agricultural Machinery (《农业机械学报》) *
赵兵: "Grape leaf segmentation based on deep learning" (基于深度学习的葡萄叶片分割), China Doctoral and Master's Theses Full-text Database (Master's), Agricultural Science and Technology series *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149727A (en) * 2020-09-22 2020-12-29 佛山科学技术学院 Green pepper image detection method based on Mask R-CNN
CN112052839A (en) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 Image data processing method, apparatus, device and medium
CN112508975A (en) * 2020-12-21 2021-03-16 上海眼控科技股份有限公司 Image identification method, device, equipment and storage medium
CN112785571A (en) * 2021-01-20 2021-05-11 浙江理工大学 Famous tea tender leaf recognition and segmentation method based on improved watershed
CN112785571B (en) * 2021-01-20 2024-04-12 浙江理工大学 Famous tea tender leaf identification and segmentation method based on improved watershed
CN113012220A (en) * 2021-02-02 2021-06-22 深圳市识农智能科技有限公司 Fruit counting method and device and electronic equipment
CN113177947B (en) * 2021-04-06 2024-04-26 广东省科学院智能制造研究所 Multi-module convolutional neural network-based complex environment target segmentation method and device
CN113177947A (en) * 2021-04-06 2021-07-27 广东省科学院智能制造研究所 Complex environment target segmentation method and device based on multi-module convolutional neural network
CN113255434B (en) * 2021-04-08 2023-12-19 淮阴工学院 Apple identification method integrating fruit characteristics and deep convolutional neural network
CN113255434A (en) * 2021-04-08 2021-08-13 淮阴工学院 Apple identification method fusing fruit features and deep convolutional neural network
CN112926551A (en) * 2021-04-21 2021-06-08 北京京东乾石科技有限公司 Target detection method, target detection device, electronic equipment and storage medium
CN113129306A (en) * 2021-05-10 2021-07-16 电子科技大学成都学院 Occlusion object segmentation solving method based on deep learning
WO2022247628A1 (en) * 2021-05-24 2022-12-01 华为技术有限公司 Data annotation method and related product
CN113469199A (en) * 2021-07-15 2021-10-01 中国人民解放军国防科技大学 Rapid and efficient image edge detection method based on deep learning
CN114511850A (en) * 2021-12-30 2022-05-17 广西慧云信息技术有限公司 Method for identifying image of fruit size and granule of sunshine rose grape
CN114511850B (en) * 2021-12-30 2024-05-14 广西慧云信息技术有限公司 Method for identifying size particle image of sunlight rose grape fruit
CN114902872A (en) * 2022-04-26 2022-08-16 华南理工大学 Visual guidance method for picking fruits by robot
CN115953593A (en) * 2023-01-10 2023-04-11 广州市易鸿智能装备有限公司 Method, device and equipment for recognizing contour of industrial part and computer storage medium
CN115953593B (en) * 2023-01-10 2023-11-21 广州市易鸿智能装备有限公司 Contour recognition method, apparatus, device and computer storage medium for industrial parts

Also Published As

Publication number Publication date
CN111325764B (en) 2022-05-31

Similar Documents

Publication Publication Date Title
CN111325764B (en) Fruit image contour recognition method
CN115601374B (en) Chromosome image segmentation method
CN108053419B (en) Multi-scale target tracking method based on background suppression and foreground anti-interference
CN110837768B (en) Online detection and identification method for rare animal protection
CN107230202B (en) Automatic identification method and system for road surface disease image
WO2018072233A1 (en) Method and system for vehicle tag detection and recognition based on selective search algorithm
Khan et al. An efficient contour based fine-grained algorithm for multi category object detection
US9971929B2 (en) Fingerprint classification system and method using regular expression machines
CN105931255A (en) Method for locating target in image based on obviousness and deep convolutional neural network
CN107610114A (en) Optical satellite remote sensing image cloud snow mist detection method based on SVMs
CN109242032B (en) Target detection method based on deep learning
CN110070562A (en) A kind of context-sensitive depth targets tracking
CN113327272A (en) Robustness long-time tracking method based on correlation filtering
CN117576079A (en) Industrial product surface abnormality detection method, device and system
CN116381672A (en) X-band multi-expansion target self-adaptive tracking method based on twin network radar
CN116071339A (en) Product defect identification method based on improved whale algorithm optimization SVM
CN109359653A (en) A kind of cotton leaf portion adhesion scab image partition method and system
CN110458019B (en) Water surface target detection method for eliminating reflection interference under scarce cognitive sample condition
CN117474029B (en) AI polarization enhancement chart code wave frequency acquisition imaging identification method based on block chain
CN112053385B (en) Remote sensing video shielding target tracking method based on deep reinforcement learning
CN116311387B (en) Cross-modal pedestrian re-identification method based on feature intersection
CN116385953B (en) Railway wagon door hinge breaking fault image identification method
CN109063749B (en) Robust convolution kernel number adaptation method based on angular point radiation domain
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN111553217A (en) Driver call monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant