CN110443811A - Fully automatic segmentation method for leaf images with complex backgrounds - Google Patents


Info

Publication number: CN110443811A (application CN201910683687.6A, China)
Other versions: CN110443811B (granted)
Prior art keywords: image, foregroundmask, region, point, segmentation
Inventors: 高理文, 林小桦
Assignee: Guangzhou University of Chinese Medicine (Guangzhou University of Traditional Chinese Medicine)
Legal status: Granted; active. (The legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis.)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/11 — Region-based segmentation
    • G06T 7/136 — Segmentation involving thresholding
    • G06T 7/155 — Segmentation involving morphological operators
    • G06T 7/187 — Segmentation involving region growing, region merging, or connected component labelling
    • G06T 7/194 — Segmentation involving foreground-background segmentation
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 — Subject of image; Context of image processing
    • G06T 2207/30181 — Earth observation
    • G06T 2207/30188 — Vegetation; Agriculture

Abstract

The invention discloses a fully automatic segmentation method for leaf images with complex backgrounds. The gradient is computed directly on the colour image; the method then exploits the fact that a leaf region contains veins, and by effectively enhancing and extracting the veins it obtains a more accurate foreground marker image. Finally, marker-controlled watershed segmentation is applied to segment the image. The method largely overcomes the problem of fully automatic segmentation of leaf images with complex backgrounds and advances complex-background leaf image recognition. Although designed for leaves, it also offers a useful reference for the fully automatic segmentation of any single target with textured venation against a complex background.

Description

Fully automatic segmentation method for leaf images with complex backgrounds
Technical field
The present invention relates to the field of image processing, and more particularly to a fully automatic segmentation method for leaf images with complex backgrounds.
Background technique
Medicinal plants are the main source of Chinese medicinal materials and the material basis on which traditional Chinese medicine heals the sick. In recent years, however, owing to the deterioration of the ecological environment, medicinal plant resources have shrunk significantly; strengthening their protection has become urgent.
If the geographical distribution of endangered medicinal plants could be surveyed in depth and a geographic resource database built, it would strongly support the protection, cultivation, and utilisation of wild medicinal plants. However, because ordinary people find it hard to distinguish plant species in complex field environments, existing resource surveys can only sample selected sites and remain far from a comprehensive, in-depth investigation, even though they already consume enormous manpower and material resources.
Hence the idea of photographing leaf images and performing machine identification, so that personnel with only basic training can identify the plant in front of them fairly accurately in the field through simple mobile-phone operations. As with most image-recognition problems, segmentation of the leaf image is the first difficulty. An accurate segmentation method for medicinal-plant leaf images with complex backgrounds already exists, but it requires manual participation.
There have been many reports on machine identification of plants from leaf images. By image-acquisition mode they fall into two kinds. The first detaches the leaf and then photographs or scans it, obtaining a simple-background leaf image; its advantage is that the image is easy to segment with high accuracy, its disadvantage that the plant is damaged. Most existing research uses this mode. The second photographs the leaf directly on the branch, obtaining a complex-background leaf image; its advantage is that no damage is done to the plant, its disadvantage that, besides the target leaf, the image also contains branches, soil, other leaves, and other background objects, so segmentation is difficult, segmentation accuracy is low, and the accuracy of subsequent classification suffers severely.
Because of its non-destructive advantage, machine identification from complex-background leaf images is the research focus, and the accurate segmentation of such images is the first problem to solve. Current deep-learning classification methods can realise the mapping from image to class, but leaf-image samples are hard to extend to large, balanced quantities: a few species are common, most are hard to find, and some can only be encountered by chance. Under these circumstances the applicability of deep-learning classification remains to be tested. Moreover, if the image can be segmented accurately and the background, i.e. the distractors, removed, any subsequent classification method will benefit.
There are many image segmentation methods; some require manual participation. In earlier work we studied such methods in depth and proposed a high-accuracy segmentation method for complex-background leaf images with manual participation, but the required manual input still makes the result depend on the user's experience. A fully automatic method is clearly preferable.
Several fully automatic image segmentation methods exist; well-known ones include:
OTSU is a threshold method for image binarisation proposed by the Japanese scholar Otsu in 1979. It selects the optimal segmentation threshold by maximising the between-class variance. The method is very sensitive to noise and to target size, and produces a good segmentation only when foreground and background contrast clearly.
Mean shift is a hill-climbing algorithm based on kernel density estimation, usable for clustering, image segmentation, tracking, and so on. Image segmentation amounts to assigning each pixel a class label, which depends on the cluster the pixel belongs to in feature space; each cluster needs a class centre, and mean shift takes the maxima of the probability density function as the class centres. Like OTSU, mean shift segments well when foreground and background contrast clearly, and poorly otherwise.
The GraphCut method links the image segmentation problem to the minimum-cut (min-cut) problem on a graph. The essence of graph-based segmentation is to minimise a cost, comprising a region term and a boundary term, by removing specific edges so that the graph splits into several subgraphs. It handles images whose pixel values differ markedly, but performs poorly on complex-background images and on images where foreground and background are similar.
In recent years deep learning has developed rapidly, and the deep-learning segmentation method FCN (Fully Convolutional Networks) arose with it, repeatedly showing superior performance on complex segmentation problems. FCN is trained end-to-end, pixels-to-pixels, on semantic segmentation, realising pixel-level classification of the image and thereby solving the semantic-level segmentation problem. Ignoring time and memory limits, it can in theory accept input images of arbitrary size. But FCN has shortcomings: (1) the result is still not fine enough and is insensitive to details in the image; (2) it classifies each pixel separately without fully considering the relationships between pixels.
Another classical method is marker-controlled watershed segmentation. It is not limited by the shape of the target region, which makes it well suited to leaves of widely differing species, and it also copes well with complex backgrounds: as long as the input foreground marker image and background marker image are accurate, it tends to produce a good segmentation. How to obtain accurate foreground and background marker images, however, is a great difficulty; often a manual procedure is the only option.
Summary of the invention
The present invention provides a fully automatic segmentation method for leaf images with complex backgrounds, realising the fully automatic segmentation of medicinal-plant leaves against complex backgrounds.
To solve the above technical problem, the technical solution of the present invention is as follows:
A fully automatic segmentation method for leaf images with complex backgrounds, comprising the following steps:
S1: perform a preliminary simple segmentation of the original leaf image using the maximum between-class variance method;
S2: convert the original leaf image into the HSI and Lab representations to obtain the component images, detect background markers in the maximum between-class variance segmentation result of each component image, and select one of them as the optimal background marker according to a preset criterion;
S3: detect the foreground marker on the original leaf image;
S4: combine the foreground and background markers and segment with marker-controlled watershed segmentation, obtaining the final segmented image.
Preferably, step S1 specifically comprises the following steps:
S1.1: shrink the original colour RGB leaf image to a preset size, obtaining image currentImage. Specifically:
compute ratio = (1200 × 900) / (rows of the original leaf image × columns of the original leaf image);
if ratio < 1, shrink the image by a factor equal to the square root of ratio; if ratio is not less than 1, no shrinking is needed. The result is image currentImage. In practice, the parameters of the ratio computation can be adjusted if another scale is desired;
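The downscaling rule of step S1.1 can be sketched as follows (a minimal illustration; the target of 1200 × 900 pixels is the value given above, and the pure-Python form is for clarity only):

```python
import math

def downscale_factor(rows, cols, target_pixels=1200 * 900):
    """Step S1.1: ratio = target / (rows * cols); shrink by sqrt(ratio) if ratio < 1."""
    ratio = target_pixels / (rows * cols)
    return math.sqrt(ratio) if ratio < 1 else 1.0

# A 4000 x 3000 photo is shrunk; an 800 x 600 one is left unchanged.
f = downscale_factor(3000, 4000)
new_size = (round(3000 * f), round(4000 * f))
```

Shrinking by the square root of the area ratio brings the pixel count close to the target while preserving the aspect ratio.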
S1.2: convert image currentImage into the HSI and Lab representations, and judge whether the foreground and background colours differ strongly in the H component image, S component image, a component image, and b component image; if so, record the corresponding winning coefficient coef and flag flat, and output the segmented image BW. Specifically:
S1.2.1: segment each component image with the maximum between-class variance method (OTSU), obtaining the corresponding segmented image BW;
S1.2.2: among the pixels on the four borders of each segmented image BW, compute the proportion frameCof of pixels valued "1";
S1.2.3: compute the number area of pixels valued "1" in each segmented image BW;
S1.2.4: in each segmented image BW, delete all small regions, retaining only the single region of largest area;
S1.2.5: compute the number targetArea of pixels valued "1" in the image BW after step S1.2.4;
S1.2.6: take the ratio of targetArea to area as foregroundCoef;
S1.2.7: if frameCof < 0.1 and foregroundCoef > 0.9, the foreground and background colours are considered strongly different: set flat to 1 and take (1 − frameCof) × foregroundCoef as the winning coefficient coef. If frameCof ≥ 0.1 or foregroundCoef ≤ 0.9, invert the image BW from S1.2.4, redo steps S1.2.2 to S1.2.6, and test again whether frameCof < 0.1 and foregroundCoef > 0.9; if so, set flat to 1 and take (1 − frameCof) × foregroundCoef as coef; otherwise set flat to 0;
S1.3: if the flag flat of every component image obtained in step S1.2 is 0, go to step S2; if one or more flags are 1, take the segmented image BW with the largest winning coefficient coef as image logicImage;
S1.4: count the places where the foreground region of logicImage reaches the four borders; if more than 3, go to step S2; if not more than 3, go to step S1.5;
In general, a leaf is photographed whole whenever possible, so the leaf rarely touches the border. Occasionally a long petiole or an elongated leaf tip reaches the border, which is accepted with generous tolerance; but if the border is touched more than 3 times, this logicImage certainly cannot serve as the segmentation result.
S1.5: at the centre of logicImage, crop a sub-image logicImageCrop whose length is 0.6 times the length of logicImage and whose width covers the same proportion, with its four borders parallel to those of logicImage and the two centre points coincident. Compute the number logicImageCropArea of pixels valued "1" in logicImageCrop and the number logicImageArea of pixels valued "1" in logicImage, then the ratio of logicImageCropArea to logicImageArea; if it is not greater than 0.5, go to step S2; if greater than 0.5, go to step S1.6;
The purpose of this step is to detect whether the foreground region of logicImage is too small and lies near the border; in that case this segmentation result is abandoned. When photographing a leaf, the leaf is normally placed at the centre of the picture and occupies most of it, so the above situation should not arise.
S1.6: apply a morphological closing to logicImage;
S1.7: fill the holes in logicImage after step S1.6, obtaining the segmentation result.
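The maximum between-class variance criterion used throughout S1 can be sketched on a grayscale histogram (an illustrative pure-Python version, not the patent's implementation; `histogram[i]` is the number of pixels with value i):

```python
def otsu_threshold(histogram):
    """OTSU: pick the threshold t maximising the between-class variance
    w0 * w1 * (mu0 - mu1)^2, where class 0 is values <= t and class 1 is values > t."""
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = 0      # pixel count of class 0
    sum0 = 0.0  # intensity mass of class 0
    for t, h in enumerate(histogram):
        w0 += h
        sum0 += t * h
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal histogram the chosen threshold separates the two modes, which is why the method works only when foreground and background contrast clearly.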
Preferably, step S2 specifically comprises the following steps:
S2.1: set the credible-foreground scale parameter para;
S2.2: detect a background marker on each of the segmented images BW corresponding to the H component image, S component image, a component image, and b component image, specifically:
S2.2.1: at the centre of segmented image BW, crop a sub-image credibleForeground whose length is (para × 2) times the length of the component image and whose width covers the same proportion, with its four borders parallel to those of BW and its centre point coincident with that of BW. This is because, when a leaf is photographed, it is normally placed at the centre of the picture and occupies most of it; the target leaf is therefore very likely to cover a small region around the image centre.
S2.2.2: compute the proportion coefOfCredibleForeground of pixels valued "1" in credibleForeground;
S2.2.3: if coefOfCredibleForeground lies in the interval [0.2, 0.8], the segmentation accuracy of BW is unsuitable for detecting a background marker in BW; terminate this detection and return a detection-failure message;
S2.2.4: if coefOfCredibleForeground is less than 0.2, save BW as image backgroundCandidate and store the inversion of BW as image BW; if coefOfCredibleForeground is not less than 0.2, store the inversion of BW as image backgroundCandidate;
S2.2.5: among the pixels on the four borders of image BW, compute the proportion frameCof of pixels valued "1";
S2.2.6: if frameCof is greater than 0.6, the segmentation accuracy of this BW is doubtful; terminate this detection and return a detection-failure message;
S2.2.7: apply the mathematical-morphology erosion operation to image backgroundCandidate;
S2.2.8: set the pixels on the four borders of backgroundCandidate after S2.2.7 to "1";
S2.2.9: on backgroundCandidate after S2.2.8, keep only the region connected to the top-left corner pixel and delete the other regions; the result, denoted image background, is the required background marker. It may, however, still contain holes; the following three steps eliminate them, which helps reduce the time consumed by marker-controlled watershed segmentation.
S2.2.10: invert image background and save it as image reverseBackground;
S2.2.11: on reverseBackground, keep only the region connected to the image centre point and delete the other regions, denoting the result reverseBackground2;
S2.2.12: save the inversion of reverseBackground2 as image background;
S2.2.13: at the centre of reverseBackground2, crop a sub-image credibleForegroundClean whose length is (para × 2) times the length of the component image and whose width covers the same proportion, its four borders parallel to those of the original image and centre points coincident;
S2.2.14: compute the proportion coefOfCredibleForegroundClean of pixels valued "1" in credibleForegroundClean;
S2.2.15: compute backgroundCoef = 1 − (1.01 − coefOfCredibleForegroundClean) × frameCof, and return image background, backgroundCoef, and a detection-success message;
S2.3: among the four background-marker detection results of S2.2, select the successful one with the largest backgroundCoef and register its image background as bestBackgroundDetect, setting bestBackgroundDetectFlat to true; if all four results return failure, set bestBackgroundDetectFlat to false;
S2.4: if bestBackgroundDetectFlat is true, correct bestBackgroundDetect as follows: first erode it, then set all pixels on its four borders to "1", and finally keep only the region connected to its top-left corner pixel, deleting the remaining regions.
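The "keep only the region connected to the top-left corner" operation used in S2.2.9 and S2.4 is an ordinary connected-component selection; a minimal 4-connected flood-fill sketch (illustrative only — the patent does not specify the connectivity, and the border pixels have already been set to "1" in S2.2.8, so the corner is guaranteed to lie in the background region):

```python
from collections import deque

def keep_corner_region(mask):
    """Keep only the '1' region 4-connected to the top-left pixel;
    every other '1' pixel is cleared. `mask` is a list of rows of 0/1."""
    rows, cols = len(mask), len(mask[0])
    keep = [[0] * cols for _ in range(rows)]
    if mask[0][0] == 1:
        q = deque([(0, 0)])
        keep[0][0] = 1
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and mask[nr][nc] == 1 and not keep[nr][nc]:
                    keep[nr][nc] = 1
                    q.append((nr, nc))
    return keep
```

Keeping only the corner-connected region discards any foreground islands that were wrongly classified as background.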
Preferably, step S3 comprises the following steps:
S3.1: initialisation, obtaining the initial foreground marker image foregroundMask and initial background marker image backgroundMask;
S3.2: compute the gradient directly on the colour RGB image currentImage;
S3.3: vein enhancement;
S3.4: vein segmentation;
S3.5: merge in the main vein;
S3.6: merge in the fragmentary minor veins.
Preferably, step S3.1 comprises the following steps:
S3.1.1: set the nearest-distance criterion coefficient nearDistancePara, and set the nearest-distance criterion nearDistance to the average of the image length and width multiplied by nearDistancePara, rounded. Set the near-border distance criterion coefficient nearBoundaryPara; set the near-border row criterion nearBoundaryRowDistance to the total number of image rows multiplied by nearBoundaryPara, rounded, and the near-border column criterion nearBoundaryColDistance to the total number of image columns multiplied by nearBoundaryPara, rounded. Set the very-near-border distance criterion coefficient veryNearBoundaryPara; set veryNearBoundaryRowDistance to the total rows multiplied by veryNearBoundaryPara, rounded, and veryNearBoundaryColDistance to the total columns multiplied by veryNearBoundaryPara, rounded. Set the border-avoidance distance coefficient dodgeBoundaryPara (for example 0.2; its value must be greater than nearBoundaryPara); set dodgeBoundaryRowDistance to the total rows multiplied by dodgeBoundaryPara, rounded, and dodgeBoundaryColDistance to the total columns multiplied by dodgeBoundaryPara, rounded;
S3.1.2: initialise the foreground marker image foregroundMask, specifically: create a binary image foregroundMask of the same size as the colour RGB image currentImage with all pixel values "0", then reset to "1" the pixels of the rectangular region at the centre of foregroundMask whose length is (para × 2) times the length of foregroundMask and whose width covers the same proportion, the four borders of the rectangle parallel to those of foregroundMask and the centre points coincident;
S3.1.3: initialise the background marker image backgroundMask, specifically: create a binary image backgroundMask of the same size as currentImage with all pixel values "0", then reset the pixels on its four borders to "1".
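The initialisation in S3.1.2/S3.1.3 can be sketched as follows (illustrative only; the value 0.125 for para is an assumed example, not a value stated in the patent):

```python
def init_markers(rows, cols, para=0.125):
    """S3.1.2/S3.1.3 sketch: foreground marker is a centred rectangle covering
    (para * 2) of each dimension; background marker is the one-pixel image border."""
    fg = [[0] * cols for _ in range(rows)]
    bg = [[0] * cols for _ in range(rows)]
    h, w = round(rows * para * 2), round(cols * para * 2)
    r0, c0 = (rows - h) // 2, (cols - w) // 2
    for r in range(r0, r0 + h):          # centred rectangle of '1's
        for c in range(c0, c0 + w):
            fg[r][c] = 1
    for r in range(rows):                # left and right borders
        bg[r][0] = bg[r][cols - 1] = 1
    for c in range(cols):                # top and bottom borders
        bg[0][c] = bg[rows - 1][c] = 1
    return fg, bg
```

The centred rectangle encodes the assumption, stated repeatedly above, that the target leaf covers the image centre; the border encodes the assumption that the frame is background.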
Preferably, step S3.2 comprises the following steps:
Compute the gradient directly on the colour RGB image currentImage, obtaining the gradient magnitude image VG and gradient angle image A. The specific process is:
S3.2.1: compute the partial derivatives in the x and y directions. Let the coordinates of any point on currentImage be $(x, y)$ and its pixel value $(R, G, B)$, where R, G, B denote the red, green, and blue component values. Compute the six partial derivatives $\frac{\partial R}{\partial x}, \frac{\partial G}{\partial x}, \frac{\partial B}{\partial x}, \frac{\partial R}{\partial y}, \frac{\partial G}{\partial y}, \frac{\partial B}{\partial y}$ using the sobel operator;
S3.2.2: compute $g_{xx}=\left|\frac{\partial R}{\partial x}\right|^2+\left|\frac{\partial G}{\partial x}\right|^2+\left|\frac{\partial B}{\partial x}\right|^2$, $g_{yy}=\left|\frac{\partial R}{\partial y}\right|^2+\left|\frac{\partial G}{\partial y}\right|^2+\left|\frac{\partial B}{\partial y}\right|^2$, and $g_{xy}=\frac{\partial R}{\partial x}\frac{\partial R}{\partial y}+\frac{\partial G}{\partial x}\frac{\partial G}{\partial y}+\frac{\partial B}{\partial x}\frac{\partial B}{\partial y}$;
S3.2.3: compute $\theta_1=\frac{1}{2}\arctan\left(\frac{2g_{xy}}{g_{xx}-g_{yy}}\right)$ and $\theta_2=\theta_1+\frac{\pi}{2}$, where arctan is the arctangent function;
S3.2.4: compute $F(\theta_1)$ and $F(\theta_2)$, where $F(\theta)=\sqrt{\frac{1}{2}\left[(g_{xx}+g_{yy})+(g_{xx}-g_{yy})\cos 2\theta+2g_{xy}\sin 2\theta\right]}$;
S3.2.5: if $F(\theta_1)$ is greater than or equal to $F(\theta_2)$, take $F(\theta_1)$ as the gradient magnitude $F(x,y)$ and $\theta_1$ as the gradient angle $\theta(x,y)$; otherwise take $F(\theta_2)$ as $F(x,y)$ and $\theta_2$ as $\theta(x,y)$. Store the results as the gradient magnitude image VG and gradient angle image A respectively.
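The per-pixel computation of step S3.2 can be sketched as follows, assuming the standard Di Zenzo colour-gradient formulation that the step appears to follow (illustrative; `atan2` is used instead of a bare arctangent so the angle stays defined when gxx equals gyy):

```python
import math

def color_gradient(Rx, Gx, Bx, Ry, Gy, By):
    """Colour gradient at one pixel from the six per-channel partial derivatives
    (the quantities the sobel operator supplies in S3.2.1).
    Returns (magnitude, angle)."""
    gxx = Rx * Rx + Gx * Gx + Bx * Bx
    gyy = Ry * Ry + Gy * Gy + By * By
    gxy = Rx * Ry + Gx * Gy + Bx * By
    theta1 = 0.5 * math.atan2(2 * gxy, gxx - gyy)
    theta2 = theta1 + math.pi / 2

    def F(theta):
        # max(0, ...) guards against tiny negative values from rounding
        return math.sqrt(max(0.0, 0.5 * ((gxx + gyy)
                       + (gxx - gyy) * math.cos(2 * theta)
                       + 2 * gxy * math.sin(2 * theta))))

    f1, f2 = F(theta1), F(theta2)
    return (f1, theta1) if f1 >= f2 else (f2, theta2)
```

For a pure horizontal edge (all x-derivatives 1, all y-derivatives 0) the magnitude is the square root of 3 and the angle is 0, as expected.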
Preferably, step S3.3 comprises the following steps:
S3.3.1: if bestBackgroundDetectFlat is true, examine the value of every point in bestBackgroundDetect; where it is "1", reset the value of the same position in gradient magnitude image VG to "0". This step prevents pixels of the background region from being mixed back into the foreground after enhancement;
S3.3.2: in gradient magnitude image VG, find the demarcation threshold high of the largest 1% of pixel values and the demarcation threshold low of the smallest 1%; reset all pixels greater than high to high and all pixels smaller than low to low. This step removes the most extreme values from the gradient magnitude image, which benefits the subsequent OTSU segmentation;
S3.3.3: in gradient angle image A, compute for every point the standard deviation over a disk neighbourhood of small radius, obtaining the standard-deviation image stdOfA; the smaller the standard deviation in the local range, the more consistent the angles of the gradient vectors near the point;
S3.3.4: reset the pixel value α of every point of VG to α = α / (stdOfA + 0.1). Its effect is to enhance the veins in VG.
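The clipping of extremes in S3.3.2 can be sketched as a percentile clamp (illustrative; exactly how the patent derives the 1% demarcation thresholds is not specified, so taking order statistics of the sorted values is an assumption):

```python
def clip_extremes(values, frac=0.01):
    """Clamp the smallest and largest `frac` of the values to the
    corresponding demarcation thresholds (S3.3.2 sketch)."""
    s = sorted(values)
    n = len(s)
    k = max(1, int(n * frac))
    low, high = s[k - 1], s[n - k]
    return [min(max(v, low), high) for v in values]
```

Removing the extreme gradient magnitudes keeps a handful of outliers from dominating the OTSU threshold computed in S3.4; the subsequent division by (stdOfA + 0.1) then boosts exactly those gradients whose directions are locally consistent, which is characteristic of veins.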
Preferably, step S3.4 comprises the following steps:
S3.4.1: at the centre of gradient magnitude image VG, crop a sub-image VGCrop whose length is (para × 2) times the length of VG and whose width covers the same proportion, its four borders parallel to those of VG and the centre points coincident. The purpose of sampling the threshold from the centre is to keep it free of interference from pixel values outside the leaf region;
S3.4.2: compute the maximum between-class variance segmentation threshold level of VGCrop;
S3.4.3: threshold gradient magnitude image VG with level, obtaining OtsuBW; the purpose of this is to extract the veins;
S3.4.4: delete the connected regions in OtsuBW that are too small;
S3.4.5: dilate OtsuBW, in order to reconnect broken veins.
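The dilation in S3.4.5 is the standard binary operation; a minimal sketch with a 3 × 3 square structuring element (the patent does not name the structuring element, so the 3 × 3 square is an assumption):

```python
def dilate(mask):
    """3x3 binary dilation: every '1' pixel spreads to its 8 neighbours,
    which closes one-pixel gaps such as breaks in a thin vein."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols:
                            out[nr][nc] = 1
    return out
```

Two vein fragments separated by a one-pixel gap become a single connected region after one dilation, which is exactly what the subsequent main-vein merging in S3.5 relies on.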
Preferably, step S3.5 comprises the following steps:
S3.5.1: copy foregroundMask as backup foregroundMaskBackup;
S3.5.2: OR foregroundMask with OtsuBW and store the result as foregroundMask;
S3.5.3: on foregroundMask, keep only the region connected to the image centre point and delete the other regions;
S3.5.4: copy foregroundMask as backup foregroundMaskForDel;
S3.5.5: detect whether foregroundMask has any point valued "1" falling in the first or last nearBoundaryRowDistance rows, or in the first or last nearBoundaryColDistance columns. If not, the newly merged foreground region is considered far from the border and is very likely the main vein crossing the central part of the image; the goal is achieved, go to S3.6;
S3.5.6: apply a small-scale erosion to foregroundMask, then keep only the region connected to the image centre point and delete the other regions;
S3.5.7: further detect whether foregroundMask has any point valued "1" falling in the first or last veryNearBoundaryRowDistance rows, or in the first or last veryNearBoundaryColDistance columns. If so, the foreground region still comes very close to the border, meaning the preceding erosion failed to trim it and another erosion is needed; if not, go to S3.5.11;
S3.5.8: apply a small-scale erosion to foregroundMask, then keep only the region connected to the image centre point and delete the other regions;
S3.5.9: again detect whether foregroundMask has any point valued "1" falling in the first or last veryNearBoundaryRowDistance rows, or in the first or last veryNearBoundaryColDistance columns. If so, the foreground region still comes very close to the border, the erosions have again failed to achieve their purpose, and special treatment is needed; if not, go to S3.5.11;
S3.5.10: reset to "0" the first and last dodgeBoundaryRowDistance rows and the first and last dodgeBoundaryColDistance columns of foregroundMask;
S3.5.11: OR foregroundMask with foregroundMaskBackup and store the result as foregroundMask, completing the merging of the main vein.
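The border-band test repeated in S3.5.5/S3.5.7/S3.5.9 reduces to one predicate; a minimal sketch:

```python
def touches_border_band(mask, band_rows, band_cols):
    """Does any '1' pixel fall within the first/last `band_rows` rows
    or the first/last `band_cols` columns? (S3.5.5-style check.)"""
    rows, cols = len(mask), len(mask[0])
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r < band_rows or r >= rows - band_rows
                               or c < band_cols or c >= cols - band_cols):
                return True
    return False
```

When the predicate is false the merged region is safely interior, and the erode-and-retest loop of S3.5.6 to S3.5.10 is skipped.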
Preferably, step S3.6 comprises the following steps:
S3.6.1: in OtsuBW, delete the foreground regions of foregroundMaskForDel and save the result as the fragmentary-vein candidate image candidates; the deletion is: for any point in OtsuBW, if the point at the same position in foregroundMaskForDel is valued "1", reset the point in OtsuBW to "0";
S3.6.2: scan the whole of candidates and find, for each region, its nearest distance DN to the image centre point; if DN is less than nearDistance, mark the region and record the row number row and column number col of its point nearest the centre. At the same time judge whether each region's horizontal distance from the four borders is less than nearBoundaryRowDistance and whether its vertical distance is less than nearBoundaryColDistance; if so, mark the region as too close to the border;
S3.6.3: copy candidates as backup avoidRegions;
S3.6.4: delete from candidates every region marked as too close to the border;
S3.6.5: for each region in candidates whose distance from the image centre is less than nearDistance, draw, with avoidance, a line segment from the region's point nearest the centre (the row and column recorded in S3.6.2) towards the image centre point. The avoidance rule is: letting (x, y) be the coordinates of any point of the segment, if avoidRegions(x, y) is "1" and the point is not the segment's start, cancel the segment; in addition, during drawing, if foregroundMask(x, y) is detected to be "1", the segment has connected to foregroundMask and its task is complete;
S3.6.6: OR foregroundMask with candidates and save as foregroundMask;
S3.6.7: in foregroundMask, keep only the region connected to the image centre point.
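The line drawing in S3.6.5 needs the integer pixels of a segment between two points; a minimal sketch using Bresenham's algorithm (the patent does not name the rasterisation method, so Bresenham is an assumption; the avoidance and early-stop rules would be checked per emitted point):

```python
def line_points(r0, c0, r1, c1):
    """Integer (row, col) points of the segment from (r0, c0) to (r1, c1),
    via Bresenham's algorithm."""
    points = []
    dr, dc = abs(r1 - r0), -abs(c1 - c0)
    sr = 1 if r0 < r1 else -1
    sc = 1 if c0 < c1 else -1
    err = dr + dc
    r, c = r0, c0
    while True:
        points.append((r, c))       # here S3.6.5 would test avoidRegions/foregroundMask
        if r == r1 and c == c1:
            break
        e2 = 2 * err
        if e2 >= dc:
            err += dc
            r += sr
        if e2 <= dr:
            err += dr
            c += sc
    return points
```

Walking the returned points in order lets the caller cancel the segment on hitting avoidRegions or stop early on reaching foregroundMask, exactly as the avoidance rule above describes.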
Preferably, step S4 comprises the following steps:
S4.1: if bestBackgroundDetectFlat is true, first dilate bestBackgroundDetect slightly into bestBackgroundDetectFat, then delete from foregroundMask the regions overlapping bestBackgroundDetectFat, keep in foregroundMask only the region connected to the image centre point and delete the other regions, and finally set backgroundMask equal to bestBackgroundDetect;
S4.2: take the result of OR-ing backgroundMask with foregroundMask as the marker and perform marker-controlled watershed segmentation, obtaining outPutImage;
S4.3: take the regions labelled "2" in outPutImage as foreground and the rest as background, obtaining the final binary segmentation result logicImage.
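The marker bookkeeping around the watershed call can be sketched as follows (illustrative only: the patent states that label "2" in outPutImage is the foreground, but assigning 1 to background seeds and 2 to foreground seeds in the marker image is an assumption about the underlying watershed implementation):

```python
def build_marker_image(fg, bg):
    """S4.2 sketch: combine the two binary masks into one marker image for
    marker-controlled watershed -- background seeds get label 1, foreground
    seeds label 2, everything else 0 (to be flooded)."""
    rows, cols = len(fg), len(fg[0])
    markers = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if fg[r][c]:
                markers[r][c] = 2
            elif bg[r][c]:
                markers[r][c] = 1
    return markers

def foreground_from_labels(labels, fg_label=2):
    """S4.3: binarise the watershed output, keeping label `fg_label` as foreground."""
    return [[1 if v == fg_label else 0 for v in row] for row in labels]
```

In a library implementation (for instance a marker-based watershed on the gradient image), the flooding would expand both seed labels until they meet; the final binarisation then keeps only the foreground label.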
Compared with the prior art, the beneficial effects of the technical solution of the present invention are:
The present invention computes the gradient directly on the color image, then exploits the fact that the leaf region contains veins: by effectively enhancing and extracting the veins, a more accurate foreground marker image is obtained. Finally, marker-controlled watershed segmentation is used to segment the image. The present invention overcomes the difficulty of fully automatic segmentation of leaf images with complex backgrounds and promotes the development of complex-background leaf image recognition. It also has reference value for the fully automatic segmentation of images with a single target against a complex background where the target region has textured venation.
Detailed description of the invention
Fig. 1 is a flow diagram of the invention.
Fig. 2 shows the results of the preliminary background segmentation in S1, wherein (a) is the original leaf image, (b) is the segmentation result of the S component image, (c) is the segmentation result of the b component image, (d) is the corrected final segmentation result, (e) is the H component image, (f) is the S component image, (g) is the a component image, (h) is the b component image, (i) is the H component image after OTSU, (j) is the S component image after OTSU, (k) is the a component image after OTSU, and (l) is the b component image after OTSU.
Fig. 3 shows the result of a Lonicerae Confusae (Sweet.) DC. leaf image after S1.2, wherein (a) is the original image, (b) is the b component image, (c) is the b component image after OTSU, and (d) is the output of S1.2.
Fig. 4 shows the result of a Clematis chinensis Osbeck leaf image after S1.2, wherein (a) is the original image, (b) is the H component image, (c) is the H component image after OTSU, and (d) is the output of S1.2.
Fig. 5 shows the results of detecting the optimal background marker, wherein (a) is the original leaf image, (b) is the optimal background marker, (c) is the foreground marker, (d) is the final segmentation result, (e) is the H component image, (f) is the S component image, (g) is the a component image, (h) is the b component image, (i) is the H component segmentation result, (j) is the S component segmentation result, (k) is the a component segmentation result, (l) is the b component segmentation result, (m) is the background detected from the H component image, (n) is the background detected from the S component image, (o) is the background detected from the a component image, and (p) is the background detected from the b component image.
Fig. 6 shows a Polygonum chinense L. leaf, wherein (a) is the original leaf image, (b) is the gradient magnitude image, (c) is the gradient angle image, (d) is the gradient magnitude image after background removal, (e) is the gradient magnitude image after removing extreme points, (f) is the local standard deviation of the gradient angle image, (g) is the gradient magnitude image after enhancement, (h) is the central rectangular region, (i) is the segmentation result of the enhanced gradient magnitude image, (j) is the result after deleting small regions, (k) is the result after dilation, (l) is the initial foreground OR-ed with the main vein, (m) is the candidate image of scattered veinlets, (n) is the result after deleting the regions near the border, (o) is the scattered veinlets after drawing lines toward the center point, and (p) is the completed merge.
Fig. 7 shows the results of merging the main vein, wherein (a) is the original image, (b) is the result just after the main vein is merged, (c) is the result after one small trim, and (d) is the final segmentation result.
Fig. 8 shows segmentation results for leaves of other plants, wherein (a), (e), (i) and (m) are the original images of leaves of four different plant species; (b) is the background marker image corresponding to (a), (c) is the foreground marker image corresponding to (a), and (d) is the segmentation result corresponding to (a); (f) is the background marker image corresponding to (e), (g) is the foreground marker image corresponding to (e), and (h) is the segmentation result corresponding to (e); (j) is the background marker image corresponding to (i), (k) is the foreground marker image corresponding to (i), and (l) is the segmentation result corresponding to (i); (n) is the background marker image corresponding to (m), (o) is the foreground marker image corresponding to (m), and (p) is the segmentation result corresponding to (m).
Specific embodiment
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
In order to better illustrate this embodiment, certain components in the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product;
It will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The technical solution of the present invention is further described below with reference to the drawings and embodiments.
Embodiment 1
This embodiment discloses a fully automatic segmentation method for complex-background leaf images, as shown in Fig. 1, comprising the following steps:
S1: perform a preliminary simple segmentation of the original leaf image using the maximum between-class variance method;
S2: convert the original leaf image into the HSI and Lab color models respectively to obtain each component image; then detect a background marker in the maximum between-class variance segmentation result of each component image, and, according to preset criteria, select one of the background markers as the optimal background marker;
S3: detect the foreground marker on the original leaf image;
S4: combine the foreground marker and the background marker and segment using marker-controlled watershed segmentation to obtain the final segmented image.
Step S1 specifically comprises the following steps:
S1.1: shrink the original color RGB leaf image to a preset size to obtain the image currentImage, specifically:
Calculate ratio = (1200 × 900) / (number of rows of the original leaf image × number of columns of the original leaf image);
If ratio is less than 1, shrink the image by a factor equal to the square root of ratio; if ratio is not less than 1, no shrinking is needed; the result is the image currentImage. In practice, if a different scale is desired, the parameters used to calculate ratio can be adjusted;
S1.2: convert the image currentImage into the HSI and Lab color models respectively, and judge whether there is a strong difference between the foreground and background colors in the H component image, S component image, a component image and b component image; if so, record the corresponding winning coefficient coef and judgment flag flat, and output the segmented image BW; specifically:
S1.2.1: segment each component image using the maximum between-class variance method (OTSU) to obtain the corresponding segmented image BW;
S1.2.2: among the pixels on the four borders of each segmented image BW, calculate the proportion frameCof of pixels whose value is "1";
S1.2.3: calculate the number area of pixels whose value is "1" in each segmented image BW;
S1.2.4: in each segmented image BW, delete all small regions, keeping only the single region with the largest area;
S1.2.5: calculate the number targetArea of pixels whose value is "1" in the image BW after step S1.2.4;
S1.2.6: take the ratio of targetArea to area as foregroundCoef;
S1.2.7: if frameCof < 0.1 and foregroundCoef > 0.9, consider that there is a strong difference between the foreground and background colors, set flat to 1, and take (1 − frameCof) × foregroundCoef as the winning coefficient coef; if frameCof ≥ 0.1 or foregroundCoef ≤ 0.9, invert the image BW obtained after S1.2.4, redo steps S1.2.2 to S1.2.6, and then judge again whether frameCof < 0.1 and foregroundCoef > 0.9; if so, set flat to 1 and take (1 − frameCof) × foregroundCoef as the winning coefficient coef; otherwise set flat to 0;
In Fig. 2, the flat of the S component segmentation result is 1 and its coef is 0.9536; the output BW is shown in Fig. 2(b). The flat of the b component segmentation result is 1 and its coef is 0.8722; the output BW is shown in Fig. 2(c). The flat of the remaining component images is 0. In Fig. 2, the segmented image BW of the S component image, which has flat equal to 1 and the largest coef, is selected, i.e. Fig. 2(b).
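The acceptance test of S1.2 can be sketched as follows. This is a minimal numpy illustration, not the patented implementation: `otsu_threshold` is a standard histogram-based OTSU, and `presegment_check` computes only frameCof (the largest-region filtering of S1.2.4, and hence the exact foregroundCoef, is omitted for brevity).

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum between-class variance (OTSU) threshold for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def presegment_check(bw):
    """S1.2-style acceptance quantities for a 0/1 binary mask bw.
    frameCof: proportion of '1' pixels on the four image borders.
    Accepts the mask when almost no foreground touches the border."""
    border = np.concatenate([bw[0, :], bw[-1, :], bw[1:-1, 0], bw[1:-1, -1]])
    frame_cof = border.mean()
    coef = 1.0 - frame_cof        # times foregroundCoef in the full method
    return frame_cof, coef, frame_cof < 0.1
```

A mask whose foreground sits well inside the frame passes the check; one spilling onto the borders is rejected, which is exactly what lets S1 discard segmentations like Fig. 3(d).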
S1.3: if the judgment flag flat obtained for every component image in step S1.2 is 0, go to step S2; if at least one judgment flag flat is 1, take the segmented image BW with the largest winning coefficient coef as the image logicImage;
S1.4: detect in how many places the foreground region of the image logicImage reaches the four borders; if more than 3 places, go to step S2; if not more than 3 places, go to step S1.5;
Generally, when photographing a leaf, the whole leaf is captured whenever possible, and the leaf rarely touches the image border. Occasionally the petiole is too long or the leaf tip is slender and reaches the border; that is already tolerated generously. Thus, if the foreground touches the border in more than 3 places, this logicImage certainly cannot be used as the segmentation result.
For example, for the Lonicerae Confusae (Sweet.) DC. leaf image in Fig. 3(a), its b component image is shown in Fig. 3(b), and the binary image in Fig. 3(c) is obtained after OTSU. After the largest foreground region is selected, the judgment finds that frameCof < 0.1 and foregroundCoef > 0.9 are satisfied (in this example frameCof is 0.0970 and foregroundCoef is 0.9985), and the segmentation result in Fig. 3(d) is output. This result is then chosen as logicImage. It can be clearly seen that the result is connected to the border in many places, and the segmentation is poor. This problem is detected here, and the result is discarded.
S1.5: at the center of the image logicImage, crop a sub-image logicImageCrop whose length is 0.6 times the length of logicImage and whose width accounts for the same proportion; the four borders of logicImageCrop are parallel to the four borders of logicImage respectively, and their center points coincide. Calculate the number logicImageCropArea of pixels whose value is "1" in logicImageCrop; calculate the number logicImageArea of pixels whose value is "1" in logicImage; compute the ratio of logicImageCropArea to logicImageArea; if it is not greater than 0.5, go to step S2; if greater than 0.5, go to step S1.6;
The purpose of this step is to detect whether the foreground region of logicImage is too small and located near the image border; if so, this segmentation result is discarded, because when photographing a leaf image, the leaf is usually placed at the center of the image and occupies most of it, so this situation should not arise.
For example, for the Clematis chinensis Osbeck leaf image in Fig. 4(a), its H component image is shown in Fig. 4(b), and the binary image in Fig. 4(c) is obtained after OTSU. After the largest foreground region is selected, the judgment finds that frameCof < 0.1 and foregroundCoef > 0.9 are satisfied (in this example frameCof is 0.0817 and foregroundCoef is 0.9993), and the segmentation result in Fig. 4(d) is output. This result is then chosen as logicImage. It can be clearly seen that the result is completely wrong. Fortunately, it is detected that its foreground region is too small and located near the image border, and it is discarded.
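The center-crop check of S1.5 reduces to one ratio. A small numpy sketch under the stated geometry (centered crop, borders parallel, centers coincident; the function name is mine):

```python
import numpy as np

def center_crop_ratio(mask, scale=0.6):
    """S1.5-style check: fraction of the total foreground of a 0/1 mask that
    falls inside a centered crop covering `scale` of each dimension."""
    rows, cols = mask.shape
    h, w = int(round(rows * scale)), int(round(cols * scale))
    r0, c0 = (rows - h) // 2, (cols - w) // 2
    crop = mask[r0:r0 + h, c0:c0 + w]
    total = mask.sum()
    return crop.sum() / total if total else 0.0
```

A centrally placed leaf yields a ratio near 1, while a small blob stuck in a corner (as in Fig. 4(d)) yields a ratio near 0 and is rejected by the > 0.5 test.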
S1.6: perform a closing operation on the image logicImage;
S1.7: perform hole filling on the image logicImage after step S1.6 to obtain the segmentation result.
For example, the logicImage of Fig. 2(b) passes the two checks of S1.4 and S1.5 and is then corrected by S1.6 and S1.7, yielding the result of Fig. 2(d).
Preferably, step S2 specifically comprises the following steps:
S2.1: set the credible foreground scale parameter para;
S2.2: detect a background marker on each of the segmented images BW corresponding to the H component image, S component image, a component image and b component image, specifically including:
S2.2.1: at the center of the segmented image BW, crop a sub-image credibleForeground whose length is (para × 2) times the length of the component image and whose width accounts for the same proportion; the four borders of credibleForeground are parallel to the four borders of BW respectively, and the center point of credibleForeground coincides with the center point of BW. This is because, when photographing a leaf image, the leaf is usually placed at the center of the image and occupies most of it; thus, the target leaf is likely to cover a region around the image center.
S2.2.2: in the sub-image credibleForeground, calculate the proportion coefOfCredibleForeground of pixels whose value is "1";
S2.2.3: if coefOfCredibleForeground lies within the interval [0.2, 0.8], the segmentation accuracy of BW is considered unsuitable for detecting a background marker in BW; terminate the detection process and return a detection failure message;
S2.2.4: if coefOfCredibleForeground is less than 0.2, save the segmented image BW as the image backgroundCandidate and store the inverted BW as the image BW; if coefOfCredibleForeground is not less than 0.2, store the inverted BW as the image backgroundCandidate;
S2.2.5: among the pixels on the four borders of the image BW, calculate the proportion frameCof of pixels whose value is "1";
S2.2.6: if frameCof is greater than 0.6, the segmentation accuracy of this BW is considered questionable; terminate this detection process and return a detection failure message;
S2.2.7: perform the mathematical-morphology erosion operation on the image backgroundCandidate;
S2.2.8: set the pixels on the four borders of the image backgroundCandidate after S2.2.7 to "1";
S2.2.9: on the image backgroundCandidate after S2.2.8, keep only the region connected to the top-left corner point of the image and delete the other regions; the result is denoted as the image background, which is the required background marker. However, there may still be holes inside it. The following three steps eliminate the holes, which helps to save time in the marker-controlled watershed segmentation.
S2.2.10: invert the image background and save it as the image reverseBackground;
S2.2.11: on the image reverseBackground, keep only the region connected to the image center point and delete the other regions; the result is denoted as the image reverseBackground2;
S2.2.12: save the inverted image reverseBackground2 as the image background;
S2.2.13: at the center of the image reverseBackground2, crop a sub-image credibleForegroundClean whose length is (para × 2) times the length of the component image and whose width accounts for the same proportion; the four borders of the sub-image are parallel to the four borders of the original image respectively, and their center points coincide;
S2.2.14: in the sub-image credibleForegroundClean, calculate the proportion coefOfCredibleForegroundClean of pixels whose value is "1";
S2.2.15: calculate backgroundCoef = 1 − (1.01 − coefOfCredibleForegroundClean) × frameCof, and return the image background, backgroundCoef and a detection success message;
S2.3: among the four background marker detection results obtained in S2.2, select the image background that was detected successfully and has the largest backgroundCoef, register it as the image bestBackgroundDetect, and set bestBackgroundDetectFlat to true; if all four background marker detection results return failure messages, set bestBackgroundDetectFlat to false;
S2.4: if bestBackgroundDetectFlat is true, correct the image bestBackgroundDetect as follows: first erode bestBackgroundDetect, then set all the pixels on its four borders to "1", and finally keep only the region connected to its top-left corner point and delete the remaining regions.
As shown in Fig. 5, Fig. 5(a) is the original image of a Polygonum chinense L. leaf. Fig. 5(e) to Fig. 5(h) are, in order, the H, S, a and b component images. Fig. 5(i) to Fig. 5(l) are the binary images of these four component images after OTSU segmentation. Fig. 5(m) to Fig. 5(p) are the four background marker images detected by S2.2 in the four binary images. At the same time, the corresponding backgroundCoef values are measured as 0.9942, 0.9972, 0.9964 and 0.9986 respectively. That is, the background marker detected in the binary image of the b component image (Fig. 5(p)) has the largest backgroundCoef. So it is selected as the optimal background marker in S2.3 and corrected in S2.4, yielding the bestBackgroundDetect of Fig. 5(b). Next, the foreground marker of Fig. 5(c) is detected. Finally, the two marker images are combined to obtain the segmentation result of Fig. 5(d).
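Two pieces of S2.2 can be sketched concisely: the "keep only the region connected to a seed point" operation used in S2.2.9/S2.2.11 (a plain 4-connected flood fill here), and the S2.2.15 score. This is an illustrative numpy sketch, not the patented code:

```python
import numpy as np
from collections import deque

def keep_component(mask, seed):
    """Keep only the 4-connected component of '1' pixels containing `seed`
    (used with the top-left corner in S2.2.9 and the image center in
    S2.2.11); all other regions are deleted."""
    out = np.zeros_like(mask)
    if not mask[seed]:
        return out
    rows, cols = mask.shape
    q = deque([seed])
    out[seed] = 1
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and mask[nr, nc] and not out[nr, nc]:
                out[nr, nc] = 1
                q.append((nr, nc))
    return out

def background_coef(clean_centre_crop, frame_cof):
    """S2.2.15: backgroundCoef = 1 - (1.01 - coefClean) * frameCof, where
    coefClean is the '1' proportion of the cleaned centre crop."""
    return 1.0 - (1.01 - clean_centre_crop.mean()) * frame_cof
```

The score rewards candidates whose cleaned centre region is almost entirely foreground (coefClean near 1) and whose borders carry little foreground (frameCof near 0), matching the 0.99+ values reported for Fig. 5.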
Step S3 comprises the following steps:
S3.1: initialization, obtaining the initial foreground marker image foregroundMask and the initial background marker image backgroundMask;
S3.2: compute the gradient directly on the color RGB image currentImage;
S3.3: vein enhancement;
S3.4: vein segmentation;
S3.5: merge the main vein;
S3.6: merge the scattered veinlets.
Step S3.1 comprises the following steps:
S3.1.1: set the nearest-distance criterion coefficient nearDistancePara, and set the nearest-distance criterion nearDistance to the average of the image length and width multiplied by nearDistancePara, rounded; set the near-border distance criterion coefficient nearBoundaryPara, set the near-border row criterion nearBoundaryRowDistance to the total number of image rows multiplied by nearBoundaryPara, rounded, and set the near-border column criterion nearBoundaryColDistance to the total number of image columns multiplied by nearBoundaryPara, rounded; set the very-near-border distance criterion coefficient veryNearBoundaryPara, set the very-near-border row criterion veryNearBoundaryRowDistance to the total number of image rows multiplied by veryNearBoundaryPara, rounded, and set the very-near-border column criterion veryNearBoundaryColDistance to the total number of image columns multiplied by veryNearBoundaryPara, rounded; set the border-avoidance distance coefficient dodgeBoundaryPara (e.g. 0.2; its value must be greater than nearBoundaryPara), set the border-avoidance row number dodgeBoundaryRowDistance to the total number of image rows multiplied by dodgeBoundaryPara, rounded, and set the border-avoidance column number dodgeBoundaryColDistance to the total number of image columns multiplied by dodgeBoundaryPara, rounded;
S3.1.2: initialize the foreground marker image foregroundMask, specifically: create a binary image foregroundMask of the same size as the color RGB image currentImage with all pixel values "0"; in foregroundMask, reset to "1" the pixel values in a rectangular region whose length is (para × 2) times the length of foregroundMask and whose width accounts for the same proportion; the four borders of the rectangle are parallel to the four borders of foregroundMask respectively, and their center points coincide;
S3.1.3: initialize the background marker image backgroundMask, specifically: create a binary image backgroundMask of the same size as the color RGB image currentImage with all pixel values "0", and reset the pixel values on the four borders of backgroundMask to "1".
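The marker initialization of S3.1.2 and S3.1.3 can be sketched in a few lines of numpy; para = 0.1 below is only an assumed value for the credible-foreground scale parameter, and the function name is mine:

```python
import numpy as np

def init_markers(shape, para=0.1):
    """S3.1.2/S3.1.3 sketch: foregroundMask is '1' inside a centered
    rectangle covering (para * 2) of each dimension; backgroundMask is '1'
    on the four image borders."""
    rows, cols = shape
    fg = np.zeros(shape, np.uint8)
    h, w = int(round(rows * para * 2)), int(round(cols * para * 2))
    r0, c0 = (rows - h) // 2, (cols - w) // 2
    fg[r0:r0 + h, c0:c0 + w] = 1          # centered credible-foreground seed
    bg = np.zeros(shape, np.uint8)
    bg[0, :] = bg[-1, :] = 1              # top and bottom borders
    bg[:, 0] = bg[:, -1] = 1              # left and right borders
    return fg, bg
```

These two seeds encode the same prior as S2: the leaf is photographed at the image center, the background touches the frame.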
Step S3.2 comprises the following steps:
S3.2.1: compute the partial derivatives in the x and y directions: let the coordinates of any point on the image currentImage be (x, y) and its pixel value be (R, G, B), where R, G and B denote the red, green and blue component values respectively; compute the six partial derivatives ∂R/∂x, ∂G/∂x, ∂B/∂x, ∂R/∂y, ∂G/∂y, ∂B/∂y using the Sobel operator;
S3.2.2: compute gxx = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)², gyy = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)², and gxy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y);
S3.2.3: compute θ1 = (1/2)·arctan(2gxy / (gxx − gyy)), where arctan is the arctangent function, and θ2 = θ1 + π/2;
S3.2.4: compute F(θ1) = sqrt{[(gxx + gyy) + (gxx − gyy)·cos 2θ1 + 2gxy·sin 2θ1] / 2} and F(θ2) = sqrt{[(gxx + gyy) + (gxx − gyy)·cos 2θ2 + 2gxy·sin 2θ2] / 2};
S3.2.5: if F(θ1) is greater than or equal to F(θ2), take F(θ1) as the gradient magnitude F(x, y) and θ1 as the gradient angle θ(x, y); otherwise take F(θ2) as the gradient magnitude F(x, y) and θ2 as the gradient angle θ(x, y); store the results as the gradient magnitude image VG and the gradient angle image A respectively.
For example, Fig. 6(a) is the color image of a Polygonum chinense L. leaf; computing the gradient directly on it yields the gradient magnitude image and gradient angle image of Fig. 6(b) and Fig. 6(c). As can be seen from Fig. 6(b), although the veins of the leaf are somewhat dim, they are relatively clear, while the background region outside the leaf has lower brightness.
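The color-gradient steps of S3.2 can be sketched in numpy as below. This is an illustrative implementation of the same multichannel (Di Zenzo-style) gradient, with two small liberties: the Sobel convolution is hand-rolled with zero padding, and arctan2 is used in place of arctan so the division by (gxx − gyy) cannot blow up:

```python
import numpy as np

def sobel(channel, axis):
    """Sobel partial derivative of a 2-D float array (zero-padded borders).
    axis=1: derivative along columns (x); axis=0: along rows (y)."""
    k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    if axis == 0:
        k = k.T
    p = np.pad(channel, 1)
    out = np.zeros_like(channel, float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + channel.shape[0], j:j + channel.shape[1]]
    return out

def color_gradient(rgb):
    """S3.2 sketch: gradient magnitude image VG and angle image A computed
    directly on the color image."""
    rgb = rgb.astype(float)
    dx = [sobel(rgb[..., c], 1) for c in range(3)]
    dy = [sobel(rgb[..., c], 0) for c in range(3)]
    gxx = sum(d * d for d in dx)
    gyy = sum(d * d for d in dy)
    gxy = sum(a * b for a, b in zip(dx, dy))
    theta1 = 0.5 * np.arctan2(2 * gxy, gxx - gyy)
    theta2 = theta1 + np.pi / 2

    def F(t):  # magnitude along direction t (clamped at 0 for safety)
        return np.sqrt(np.maximum(
            0.5 * ((gxx + gyy) + (gxx - gyy) * np.cos(2 * t)
                   + 2 * gxy * np.sin(2 * t)), 0))

    f1, f2 = F(theta1), F(theta2)
    VG = np.where(f1 >= f2, f1, f2)
    A = np.where(f1 >= f2, theta1, theta2)
    return VG, A
```

On a vertical luminance edge the magnitude peaks at the edge and the selected angle is 0 (the gradient points along x), which is the behavior S3.2.5 describes.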
Step S3.3 comprises the following steps:
S3.3.1: if bestBackgroundDetectFlat is true, examine the value of each point in bestBackgroundDetect; if it is "1", reset the value of the point at the same position in the gradient magnitude image VG to "0". This step prevents certain pixels of the background region from being mixed into the foreground again after enhancement, as in Fig. 6(d);
S3.3.2: in the gradient magnitude image VG, find the threshold high that separates the 1% of points with the largest pixel values, and the threshold low that separates the 1% of points with the smallest pixel values; reset all points whose pixel value is greater than high to high, and all points whose pixel value is less than low to low. This step eliminates the points with the most extreme pixel values in the gradient magnitude image, which benefits the subsequent OTSU segmentation; as shown in Fig. 6(e), the resulting image is softer.
S3.3.3: in the gradient angle image A, compute the standard deviation of every point over a disk-shaped neighborhood of small radius, obtaining the standard deviation image stdOfA. A smaller standard deviation within the local range indicates higher consistency of the gradient vector angles near the point; as in Fig. 6(f), the dim lines in the middle correspond to the main vein.
S3.3.4: reset the pixel value α of every point on the gradient magnitude image VG to α = α / (stdOfA + 0.1). Its function is to enhance the veins in VG. Fig. 6(g) shows the result after vein enhancement: compared to Fig. 6(e), the veins are indeed brighter, but it can also be seen that the leaf boundary is enhanced, as are some points outside the leaf.
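The enhancement of S3.3.3–S3.3.4 can be sketched as follows. Assumptions: a square window stands in for the patent's disk neighborhood, and the radius is an arbitrary small value; the slow explicit loops are only for clarity:

```python
import numpy as np

def local_angle_std(A, radius=2):
    """S3.3.3 sketch: per-pixel standard deviation of the gradient angle
    over a small square window (the patent uses a disk)."""
    rows, cols = A.shape
    std = np.zeros_like(A, float)
    for r in range(rows):
        for c in range(cols):
            win = A[max(0, r - radius):r + radius + 1,
                    max(0, c - radius):c + radius + 1]
            std[r, c] = win.std()
    return std

def enhance_veins(VG, A, radius=2):
    """S3.3.4: boost gradient magnitudes where nearby gradient angles are
    consistent (small local std), i.e. along vein-like lines."""
    return VG / (local_angle_std(A, radius) + 0.1)
```

Where the angle field is perfectly consistent the magnitude is boosted tenfold (divided by 0.1); where the angles vary the boost shrinks, which is why elongated veins come out bright in Fig. 6(g).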
S3.4 comprises the following steps:
S3.4.1: at the center of the gradient magnitude image VG, crop a sub-image VGCrop whose length is (para × 2) times the length of VG and whose width accounts for the same proportion; the four borders of the sub-image are parallel to the four borders of VG respectively, and their center points coincide, as shown in Fig. 6(h). The purpose of sampling the threshold from the center is to keep the threshold free from interference by pixel values outside the leaf region;
S3.4.2: find the maximum between-class variance segmentation threshold level of VGCrop;
S3.4.3: threshold the gradient magnitude image VG with level to obtain OtsuBW; the purpose of this step is to find the veins, as shown in Fig. 6(i);
S3.4.4: delete the connected regions in OtsuBW that are too small, i.e. those whose area is 10 or below, as shown in Fig. 6(j);
S3.4.5: dilate OtsuBW in order to connect the broken veins, as shown in Fig. 6(k);
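S3.4 can be sketched end to end as below. Two stated liberties: the crop's OTSU threshold is replaced by mean + std of the crop purely to keep the sketch short, and the dilation is a one-pixel 4-neighbor dilation rather than a tunable structuring element:

```python
import numpy as np
from collections import deque

def remove_small(bw, min_area):
    """S3.4.4 sketch: delete 4-connected regions of area <= min_area."""
    out = bw.copy()
    seen = np.zeros(bw.shape, bool)
    for r0 in range(bw.shape[0]):
        for c0 in range(bw.shape[1]):
            if bw[r0, c0] and not seen[r0, c0]:
                comp, q = [(r0, c0)], deque([(r0, c0)])
                seen[r0, c0] = True
                while q:
                    r, c = q.popleft()
                    for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                        if (0 <= nr < bw.shape[0] and 0 <= nc < bw.shape[1]
                                and bw[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            comp.append((nr, nc)); q.append((nr, nc))
                if len(comp) <= min_area:
                    for r, c in comp:
                        out[r, c] = 0
    return out

def segment_veins(VG, para=0.1, min_area=10):
    """S3.4 sketch: threshold sampled from a centered crop (so leaf-exterior
    pixels cannot skew it), applied globally, small regions deleted, then
    one dilation to reconnect broken veins."""
    rows, cols = VG.shape
    h, w = int(round(rows * para * 2)), int(round(cols * para * 2))
    crop = VG[(rows - h) // 2:(rows - h) // 2 + h,
              (cols - w) // 2:(cols - w) // 2 + w]
    level = crop.mean() + crop.std()       # stand-in for OTSU on the crop
    bw = (VG > level).astype(np.uint8)
    bw = remove_small(bw, min_area)
    d = bw.copy()                          # one-pixel 4-neighbor dilation
    d[1:, :] |= bw[:-1, :]; d[:-1, :] |= bw[1:, :]
    d[:, 1:] |= bw[:, :-1]; d[:, :-1] |= bw[:, 1:]
    return d
```

An elongated bright line through the center survives (and thickens), while an isolated bright speck is deleted, mirroring Fig. 6(i)–(k).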
Step S3.5 comprises the following steps:
S3.5.1: copy foregroundMask as the backup foregroundMaskBackup;
S3.5.2: OR foregroundMask with OtsuBW and store the result as foregroundMask;
S3.5.3: on foregroundMask, keep only the region connected to the image center point and delete the other regions; the result is shown in Fig. 6(l);
S3.5.4: copy foregroundMask as the backup foregroundMaskForDel;
S3.5.5: detect whether any point on foregroundMask with value "1" falls within the first nearBoundaryRowDistance rows, or the last nearBoundaryRowDistance rows, or the first nearBoundaryColDistance columns, or the last nearBoundaryColDistance columns; if none, the newly merged foreground region is considered far from the image border and is most likely the main vein crossing the central part of the image; the purpose is achieved, so go to S3.6. In Fig. 6(l), for example, no foreground is close to the border, so in that example the method goes directly to S3.6; that figure is also the result after the main vein is merged;
S3.5.6: apply a small-scale erosion to foregroundMask, then keep only the region connected to the image center point and delete the other regions. The function of this step is: when the main vein is too long and has reached the vicinity of the image border, there is a risk that it is mistakenly connected to some regions outside the leaf, so it needs to be trimmed. For example, for the leaf image of Datura metel L. whose original is Fig. 7(a), the foreground marker image after merging the main vein is shown in Fig. 7(b); the main vein has reached the vicinity of the image border (in fact it has reached the border itself) and clearly needs to be trimmed. So in this example, after S3.5.5 is executed, the method continues to S3.5.6. After trimming, the result of Fig. 7(c) is obtained, and the originally merged main vein is largely retained. Later, the final segmentation result of Fig. 7(d) is obtained.
S3.5.7: further detect whether any point on foregroundMask with value "1" falls within the first veryNearBoundaryRowDistance rows, or the last veryNearBoundaryRowDistance rows, or the first veryNearBoundaryColDistance columns, or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image border, meaning the previous erosion did not achieve the trimming purpose and erosion is needed again; if not, go to S3.5.11;
S3.5.8: erode foregroundMask using a disk of radius 3 as the structuring element, then keep only the region connected to the image center point and delete the other regions;
S3.5.9: further detect whether any point on foregroundMask with value "1" falls within the first veryNearBoundaryRowDistance rows, or the last veryNearBoundaryRowDistance rows, or the first veryNearBoundaryColDistance columns, or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image border, meaning the preceding erosion again failed to achieve its purpose and special treatment is needed; if not, go to S3.5.11;
S3.5.10: reset to "0" all points of foregroundMask in the first dodgeBoundaryRowDistance rows, the last dodgeBoundaryRowDistance rows, the first dodgeBoundaryColDistance columns and the last dodgeBoundaryColDistance columns;
S3.5.11: OR foregroundMask with foregroundMaskBackup and store the result as foregroundMask, completing the merging of the main vein;
Step S3.6 comprises the following steps:
S3.6.1: in OtsuBW, delete the foreground regions in foregroundMaskForDel and save the result as the scattered-veinlet candidate image candidates; the deletion is: for any point in OtsuBW, if the point at the same position in foregroundMaskForDel has value "1", reset the point in OtsuBW to "0", as shown in Fig. 6(m);
S3.6.2: scan the whole image candidates and, for each region, find DN, the shortest distance from the region to the image center; if DN is less than nearDistance, mark the region and record the row number row and column number col of the point in the region nearest to the image center; at the same time, judge whether each region's distance from the top or bottom image border is less than nearBoundaryRowDistance, and whether its distance from the left or right border is less than nearBoundaryColDistance; if so, mark the region as too close to the border;
S3.6.3: copy candidates as the backup avoidRegions;
S3.6.4: delete from candidates all regions marked as too close to the border, as shown in Fig. 6(n);
S3.6.5: for each region in candidates whose distance from the image center is less than nearDistance, draw, with avoidance, a line segment from the point of the region nearest to the image center (whose row and column numbers were found in S3.6.2) toward the image center point; the avoidance rule is: let the coordinates of any point on the segment be (x, y); if avoidRegions(x, y) is "1" and the point is not the starting point of the segment, cancel the segment; in addition, during line drawing, if foregroundMask(x, y) is detected to be "1", the segment is considered connected to foregroundMask and the task is complete, so there is no need to draw further, which saves time; the resulting candidates is shown in Fig. 6(o);
S3.6.6: OR foregroundMask with candidates and save the result as foregroundMask;
S3.6.7: in foregroundMask, keep only the region connected to the image center.
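The line-drawing-with-avoidance rule of S3.6.5 can be sketched as follows. Assumptions: a simple linear interpolation of points stands in for a full Bresenham line, and the function name and exact early-exit behavior are mine; only the two rules from the patent (cancel on entering avoidRegions, stop on reaching foregroundMask) are implemented:

```python
import numpy as np

def draw_toward_center(candidates, avoid, fg_mask, start, center):
    """S3.6.5 sketch: walk from a veinlet's nearest point toward the image
    center. Cancel if the path enters another candidate region (avoid,
    checked for every point except the start); stop early once the path
    reaches foregroundMask. Returns True if the segment was committed."""
    (r0, c0), (r1, c1) = start, center
    n = max(abs(r1 - r0), abs(c1 - c0))
    path = []
    for i in range(1, n + 1):
        r = round(r0 + (r1 - r0) * i / n)
        c = round(c0 + (c1 - c0) * i / n)
        if avoid[r, c]:       # entered another region: cancel the segment
            return False
        if fg_mask[r, c]:     # reached the foreground mask: connected
            break
        path.append((r, c))
    for r, c in path:
        candidates[r, c] = 1  # commit the drawn segment
    return True
```

Collecting the path first and committing it only at the end ensures a cancelled segment leaves candidates untouched, matching the "cancel the line segment" wording.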
S4 comprises the following steps:
S4.1: if bestBackgroundDetectFlat is true, first slightly dilate bestBackgroundDetect with a disk operator of radius 3 to obtain bestBackgroundDetectFat; then, in foregroundMask, delete the regions overlapping bestBackgroundDetectFat, keep the region of foregroundMask connected to the image center and delete the other regions; finally, set backgroundMask equal to bestBackgroundDetect;
S4.2: use the result of OR-ing backgroundMask with foregroundMask as the markers and perform marker-controlled watershed segmentation to obtain outPutImage; an example of backgroundMask is shown in Fig. 5(b) and an example of foregroundMask in Fig. 5(c);
S4.3: take the regions labeled "2" in outPutImage as foreground and the rest as background to obtain the final binary segmentation result logicImage, as shown in Fig. 5(d).
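The marker preparation for S4.2 can be sketched as below. The label assignment (background 1, foreground 2) follows S4.3, which keeps the regions labeled "2" as the leaf; the overlap tie-break is my own choice, since after S4.1 the two masks should no longer overlap:

```python
import numpy as np

def build_markers(background_mask, foreground_mask):
    """S4.2 sketch: combine the two binary marker images into one label
    image for marker-controlled watershed. Background seeds get label 1,
    foreground seeds label 2; unmarked pixels stay 0 and are assigned by
    the watershed. The result can be passed as the marker image of any
    watershed implementation (e.g. scikit-image's
    skimage.segmentation.watershed, an assumed dependency)."""
    markers = np.zeros(background_mask.shape, np.int32)
    markers[background_mask > 0] = 1
    markers[foreground_mask > 0] = 2   # foreground wins where masks overlap
    return markers
```

After the watershed floods the gradient image from these seeds, `outPutImage == 2` is exactly the binary leaf mask of S4.3.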
For more examples, please refer to Fig. 8, which gives the original images, foreground marker images (the foregroundMask described in S4.2), background marker images (the backgroundMask described in S4.2) and segmentation results of 4 other complex-background leaf images.
To evaluate the segmentation results, 5 well-known observation criteria are used. Among them, TP (True Positive) denotes the number of positive samples (points inside the leaf region) classified as positive; TN (True Negative) denotes the number of negative samples (background points) classified as negative; FP (False Positive) denotes the number of negative samples classified as positive, commonly called false alarms; FN (False Negative) denotes the number of positive samples classified as negative, commonly called misses.
Images of 88 species were acquired, with 100 to 115 images per species. For information such as the species names and medicinal names of the plants, please refer to the references.
To verify the validity of the segmentation algorithm, 5 images were randomly selected from each of the 88 species of leaf images, and the standard segmentation result of each was determined manually as a reference, forming Database 1. Eight of these images are shown in Fig. 1, together with the original images and the segmentation results of this algorithm.
Later, in order to train the deep learning algorithm, 5 images were randomly selected from each of the 88 species after excluding the images already selected for Database 1, and the standard segmentation results were determined manually one by one, forming the training set corresponding to the test set Database 1, denoted Database 0.
For a direct comparison with existing mainstream algorithms, the traditional OTSU, MeanShift and GraphCut methods were first applied to the 440 images of Database 1. The corresponding results are recorded in rows 1 to 3 of Table 1.
Then, the deep-learning segmentation method FCN was applied. Imagenet-vgg-verydeep-16 was chosen as the embedded backbone network, FCN was trained for 50 epochs on the aforementioned Database 0, and segmentation was then tested on Database 1, obtaining good results. These results are recorded in row 4 of Table 1.
Finally, the present method was used to segment the 440 complex-background leaf images of Database 1. The segmentation results are recorded in row 5 of Table 1 (highlighted in bold).
A supplementary note: because MeanShift runs far too long on high-resolution images (processing one 400 × 300 image takes more than an hour), the images were compressed to 200 × 150 for it. FCN cannot accept high-resolution images (for memory and time reasons), so the images were compressed to 400 × 300 for it. The other algorithms were tested at the default image resolution of 1200 × 900. At this resolution, image details such as villi or small spines on the leaf margin are well preserved.
Comparing rows 1 to 5 of Table 1, the present algorithm shows an obvious advantage over the three conventional methods, and is also slightly better than the deep-learning segmentation method FCN.
TABLE 1
AVERAGE RESULT BASED ON DATABASE 1
The same or similar reference labels correspond to the same or similar components;
The positional relationships described in the drawings are for illustration only and should not be construed as limiting this patent;
Obviously, the above embodiments are merely examples given for clearly illustrating the present invention and are not intended to limit its embodiments. For those of ordinary skill in the art, other variations or changes can be made on the basis of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A fully automatic segmentation method for a complex-background leaf image, characterized by comprising the following steps:
S1: performing a preliminary simple segmentation of the original leaf image using the maximum between-class variance method;
S2: converting the original leaf image into the representations of the HIS and Lab models respectively to obtain the component images, then detecting background markers in the maximum between-class variance segmentation result of each component image, and, according to a preset criterion, selecting one of the background markers as the optimal background marker;
S3: detecting foreground markers on the original leaf image;
S4: collating the foreground markers and background markers, and performing marker-controlled watershed segmentation to obtain the final segmented image.
2. The fully automatic segmentation method for a complex-background leaf image according to claim 1, characterized in that step S1 specifically comprises the following steps:
S1.1: shrinking the original colour RGB leaf image to a preset size to obtain the image currentImage, specifically:
computing ratio = (1200 × 900)/(number of rows of the original leaf image × number of columns of the original leaf image);
if ratio is less than 1, shrinking the image by a factor equal to the square root of ratio; if ratio is not less than 1, performing no shrinking; thus obtaining the image currentImage;
S1.2: converting the image currentImage into the representations of the HIS and Lab models respectively, and judging whether the foreground colour and background colour differ greatly in the H component image, S component image, a component image and b component image; if so, recording the corresponding winning coefficient coef and judgement flag flat, and outputting the segmented image BW; specifically:
S1.2.1: segmenting each component image using the maximum between-class variance method to obtain the corresponding segmented image BW;
S1.2.2: computing, among the pixels of the four borders of each segmented image BW, the proportion frameCof of pixels whose value is "1";
S1.2.3: computing the number area of pixels whose value is "1" in each segmented image BW;
S1.2.4: deleting, in each segmented image BW, all small-area regions and retaining only the single region with the largest area;
S1.2.5: computing the number targetArea of pixels whose value is "1" in the image BW obtained after step S1.2.4;
S1.2.6: taking the ratio of targetArea to area as foregroundCoef;
S1.2.7: if frameCof < 0.1 and foregroundCoef > 0.9, considering that the foreground and background colours differ greatly, setting flat to 1, and taking (1 − frameCof) × foregroundCoef as the winning coefficient coef; if frameCof ≥ 0.1 or foregroundCoef ≤ 0.9, inverting the image BW obtained after S1.2.4, repeating steps S1.2.2 to S1.2.6, and again judging whether frameCof < 0.1 and foregroundCoef > 0.9; if so, setting flat to 1 and taking (1 − frameCof) × foregroundCoef as the winning coefficient coef; otherwise setting flat to 0;
S1.3: if the judgement flag flat obtained in step S1.2 is 0 for every component image, proceeding to step S2; if at least one judgement flag flat is 1, taking the segmented image BW corresponding to the largest winning coefficient coef as the image logicImage;
S1.4: detecting at how many places the foreground region of the image logicImage reaches the four borders; if at more than 3 places, proceeding to step S2; if at no more than 3 places, proceeding to step S1.5;
S1.5: cropping, at the centre of the image logicImage, a sub-image logicImageCrop whose length is 0.6 times the length of logicImage and whose width accounts for the same proportion, the four borders of logicImageCrop being parallel to the four borders of logicImage respectively and the centre points of the two coinciding; computing the number logicImageCropArea of pixels whose value is "1" in logicImageCrop; computing the number logicImageArea of pixels whose value is "1" in logicImage; computing the ratio of logicImageCropArea to logicImageArea; if it is not greater than 0.5, proceeding to step S2; if it is greater than 0.5, proceeding to step S1.6;
S1.6: performing a closing operation on the image logicImage;
S1.7: performing hole filling on the image logicImage obtained after step S1.6, to obtain the segmentation result.
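Outside the claim language, the border/largest-region test of steps S1.2.2 to S1.2.7 can be sketched as follows. This is a minimal NumPy sketch: `largest_region` is a helper written here with a plain BFS, the thresholds 0.1 and 0.9 are the ones stated in the claim, and the retry on the inverted BW is omitted for brevity.

```python
import numpy as np
from collections import deque

def largest_region(bw):
    """Keep only the largest 4-connected region of "1" pixels (cf. S1.2.4)."""
    h, w = bw.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros((h, w), dtype=bool)
    best_size = 0
    for sy in range(h):
        for sx in range(w):
            if bw[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w and bw[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best_size:
                    best_size = len(comp)
                    best = np.zeros((h, w), dtype=bool)
                    for y, x in comp:
                        best[y, x] = True
    return best

def presegment_check(bw):
    """Return (flat, coef) for one Otsu-segmented component image BW."""
    bw = bw.astype(bool)
    border = np.concatenate([bw[0], bw[-1], bw[:, 0], bw[:, -1]])
    frame_cof = border.mean()                      # S1.2.2
    area = bw.sum()                                # S1.2.3
    target_area = largest_region(bw).sum()         # S1.2.4-S1.2.5
    foreground_coef = target_area / max(area, 1)   # S1.2.6
    if frame_cof < 0.1 and foreground_coef > 0.9:  # S1.2.7 (retry branch omitted)
        return 1, (1 - frame_cof) * foreground_coef
    return 0, 0.0
```

A centred blob that does not touch the borders passes the check; a mask whose borders are mostly "1" fails it, which is the intended behaviour for a leaf photographed against its background.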
3. The fully automatic segmentation method for a complex-background leaf image according to claim 2, characterized in that step S2 specifically comprises the following steps:
S2.1: setting the credible-foreground scale parameter para;
S2.2: detecting background markers on the maximum between-class variance segmented images BW corresponding to the H component image, S component image, a component image and b component image respectively, specifically including:
S2.2.1: cropping, at the centre of the segmented image BW, a sub-image credibleForeground whose length is (para × 2) times the length of the component image and whose width accounts for the same proportion, the four borders of credibleForeground being parallel to the four borders of BW respectively and the centre point of credibleForeground coinciding with the centre point of BW;
S2.2.2: computing, in the sub-image credibleForeground, the proportion coefOfCredibleForeground of pixels whose value is "1";
S2.2.3: if coefOfCredibleForeground lies in the interval [0.2, 0.8], the segmentation accuracy of the segmented image BW is questionable and it is unsuitable for background-marker detection; terminating the detection process and returning a detection-failure message;
S2.2.4: if coefOfCredibleForeground is less than 0.2, saving the segmented image BW as the image backgroundCandidate and storing the inverted BW as the new image BW; if coefOfCredibleForeground is not less than 0.2, storing the inverted BW as the image backgroundCandidate;
S2.2.5: computing, among the pixels of the four borders of the image BW, the proportion frameCof of pixels whose value is "1";
S2.2.6: if frameCof is greater than 0.6, considering the segmentation accuracy of this image BW questionable, terminating this detection process and returning a detection-failure message;
S2.2.7: performing a mathematical-morphology erosion operation on the image backgroundCandidate;
S2.2.8: setting the pixels of the four borders of the image backgroundCandidate obtained after S2.2.7 to "1";
S2.2.9: on the image backgroundCandidate obtained after S2.2.8, selecting and retaining the region connected to the top-left corner point of the image and deleting the other regions; the result is denoted as the image background, which is the required background marker;
S2.2.10: inverting the image background and saving it as the image reverseBackground;
S2.2.11: on the image reverseBackground, selecting and retaining the region connected to the image centre point and deleting the other regions; the result is denoted as the image reverseBackground2;
S2.2.12: saving the inverted image reverseBackground2 as the image background;
S2.2.13: cropping, at the centre of the image reverseBackground2, a sub-image credibleForegroundClean whose length is (para × 2) times the length of the component image and whose width accounts for the same proportion, the four borders of the sub-image being parallel to the four borders of the original image respectively and the centre points coinciding;
S2.2.14: computing, in the sub-image credibleForegroundClean, the proportion coefOfCredibleForegroundClean of pixels whose value is "1";
S2.2.15: computing backgroundCoef = 1 − (1.01 − coefOfCredibleForegroundClean) × frameCof, and returning the image background, backgroundCoef and a detection-success message;
S2.3: among the four background-marker detection results obtained in S2.2, selecting the image background that was detected successfully and has the largest backgroundCoef as the image bestBackgroundDetect, and registering bestBackgroundDetectFlat as true; if all four background-marker detection results return failure messages, registering bestBackgroundDetectFlat as false;
S2.4: if bestBackgroundDetectFlat is true, correcting the image bestBackgroundDetect, the specific process being: first eroding the image bestBackgroundDetect, then setting all the pixels on its four borders to "1", and finally selecting and retaining the region connected to its top-left corner point and deleting the remaining regions.
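Outside the claim language, the core of the S2.2 detection (centre-crop confidence test, polarity choice, border seeding, flood from the top-left corner) can be sketched as below. This is a hedged sketch: the value para=0.25 is assumed, and the frame check (S2.2.5 to S2.2.7) and the subsequent clean-up (S2.2.10 to S2.2.15) are omitted for brevity.

```python
import numpy as np
from collections import deque

def flood_from(mask, seed):
    """Region of "1" pixels 4-connected to the seed point."""
    h, w = mask.shape
    out = np.zeros((h, w), dtype=bool)
    if not mask[seed]:
        return out
    q = deque([seed])
    out[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not out[ny, nx]:
                out[ny, nx] = True
                q.append((ny, nx))
    return out

def detect_background_marker(bw, para=0.25):
    """Sketch of S2.2: return a background-marker image, or None on failure."""
    bw = bw.astype(bool)
    h, w = bw.shape
    ch, cw = int(h * para * 2), int(w * para * 2)   # centre crop (S2.2.1)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    coef = bw[y0:y0 + ch, x0:x0 + cw].mean()        # S2.2.2
    if 0.2 <= coef <= 0.8:                          # S2.2.3: too ambiguous
        return None
    candidate = bw.copy() if coef < 0.2 else ~bw    # S2.2.4
    # (S2.2.5-S2.2.7, the frame check and erosion, are omitted here)
    candidate[0, :] = candidate[-1, :] = True       # S2.2.8: seed the borders
    candidate[:, 0] = candidate[:, -1] = True
    return flood_from(candidate, (0, 0))            # S2.2.9: keep top-left region
```

On a mask whose centre crop is confidently foreground, the returned marker covers the background ring and excludes the leaf blob.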
4. The fully automatic segmentation method for a complex-background leaf image according to claim 3, characterized in that step S3 comprises the following steps:
S3.1: initialization, obtaining the initial foreground marker map foregroundMask and the initial background marker map backgroundMask;
S3.2: computing the gradient directly on the colour RGB image currentImage;
S3.3: vein enhancement;
S3.4: vein segmentation;
S3.5: merging the main vein;
S3.6: merging the fragmentary veinlets.
5. The fully automatic segmentation method for a complex-background leaf image according to claim 4, characterized in that step S3.1 comprises the following steps:
S3.1.1: setting the nearest-distance criterion coefficient nearDistancePara, and setting the nearest-distance criterion nearDistance to the average of the image length and width multiplied by nearDistancePara, rounded; setting the near-border distance criterion coefficient nearBoundaryPara, setting the near-border row criterion nearBoundaryRowDistance to the total number of rows of the image multiplied by nearBoundaryPara, rounded, and setting the near-border column criterion nearBoundaryColDistance to the total number of columns of the image multiplied by nearBoundaryPara, rounded; setting the very-near-border distance criterion coefficient veryNearBoundaryPara, setting the very-near-border row criterion veryNearBoundaryRowDistance to the total number of rows of the image multiplied by veryNearBoundaryPara, rounded, and setting the very-near-border column criterion veryNearBoundaryColDistance to the total number of columns of the image multiplied by veryNearBoundaryPara, rounded; setting the border-avoidance distance coefficient dodgeBoundaryPara (for example 0.2; its value must be greater than nearBoundaryPara), setting the border-avoidance row count dodgeBoundaryRowDistance to the total number of rows of the image multiplied by dodgeBoundaryPara, rounded, and setting the border-avoidance column count dodgeBoundaryColDistance to the total number of columns of the image multiplied by dodgeBoundaryPara, rounded;
S3.1.2: initializing the foreground marker map foregroundMask, specifically: creating a binary image foregroundMask of the same size as the colour RGB image currentImage with all pixel values "0", then resetting to "1" the pixel values in a rectangular region at the centre of foregroundMask whose length is (para × 2) times the length of foregroundMask and whose width accounts for the same proportion, the four borders of the rectangle being parallel to the four borders of foregroundMask respectively and the centre points of the two coinciding;
S3.1.3: initializing the background marker map backgroundMask, specifically: creating a binary image backgroundMask of the same size as the colour RGB image currentImage with all pixel values "0", and resetting to "1" the pixel values on the four borders of backgroundMask.
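Outside the claim language, the two initial marker maps of S3.1.2 and S3.1.3 reduce to a few lines of NumPy. The value para=0.25 is an assumption for illustration; the claim only requires the central rectangle to be (para × 2) of each dimension.

```python
import numpy as np

def init_markers(h, w, para=0.25):
    """Central-rectangle foreground seed (S3.1.2) and image-border background seed (S3.1.3)."""
    fg = np.zeros((h, w), dtype=np.uint8)
    rh, rw = int(h * para * 2), int(w * para * 2)   # (para * 2) of each dimension
    y0, x0 = (h - rh) // 2, (w - rw) // 2
    fg[y0:y0 + rh, x0:x0 + rw] = 1                  # credible foreground at the centre
    bg = np.zeros((h, w), dtype=np.uint8)
    bg[0, :] = bg[-1, :] = 1                        # the four borders are credible background
    bg[:, 0] = bg[:, -1] = 1
    return fg, bg
```

The two maps never overlap as long as para × 2 < 1, which keeps the later watershed markers consistent.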
6. The fully automatic segmentation method for a complex-background leaf image according to claim 5, characterized in that step S3.2 comprises the following steps:
computing the gradient directly on the colour RGB image currentImage to obtain the gradient magnitude image VG and the gradient angle image A, the specific process being:
S3.2.1: computing the partial derivatives in the x and y directions: let the coordinates of any point on the image currentImage be (x, y) and its pixel value be (R, G, B), where R, G and B denote the red, green and blue component values respectively; computing ∂R/∂x, ∂G/∂x, ∂B/∂x, ∂R/∂y, ∂G/∂y and ∂B/∂y;
the six partial derivatives are computed with the Sobel operator;
S3.2.2: computing gxx = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)², gyy = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)² and gxy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y);
S3.2.3: computing θ₁ = ½ arctan(2gxy/(gxx − gyy)) and θ₂ = θ₁ + π/2, where arctan is the arctangent function;
S3.2.4: computing F(θ₁) = {½[(gxx + gyy) + (gxx − gyy)cos 2θ₁ + 2gxy sin 2θ₁]}^(1/2) and F(θ₂) = {½[(gxx + gyy) + (gxx − gyy)cos 2θ₂ + 2gxy sin 2θ₂]}^(1/2);
S3.2.5: if F(θ₁) is greater than or equal to F(θ₂), taking F(θ₁) as the gradient magnitude F(x, y) and θ₁ as the gradient angle θ(x, y); otherwise taking F(θ₂) as the gradient magnitude F(x, y) and θ₂ as the gradient angle θ(x, y); storing these to obtain the gradient magnitude map VG and the gradient angle map A respectively.
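Outside the claim language, the per-channel Sobel derivatives and their combination in S3.2.1 to S3.2.5 can be sketched as below. One implementation choice is hedged: `np.arctan2` replaces the plain arctangent of S3.2.3 so that the flat case gxx = gyy does not divide by zero.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2(img, k):
    """3x3 correlation with edge-replicated padding."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def color_gradient(rgb):
    """Gradient magnitude VG and angle A of a colour image (S3.2.1-S3.2.5)."""
    gx = [conv2(rgb[..., c], SOBEL_X) for c in range(3)]   # S3.2.1: Sobel per channel
    gy = [conv2(rgb[..., c], SOBEL_Y) for c in range(3)]
    gxx = sum(g * g for g in gx)                            # S3.2.2
    gyy = sum(g * g for g in gy)
    gxy = sum(a * b for a, b in zip(gx, gy))
    theta1 = 0.5 * np.arctan2(2 * gxy, gxx - gyy)           # S3.2.3
    theta2 = theta1 + np.pi / 2

    def F(theta):                                           # S3.2.4
        inner = 0.5 * ((gxx + gyy)
                       + (gxx - gyy) * np.cos(2 * theta)
                       + 2 * gxy * np.sin(2 * theta))
        return np.sqrt(np.maximum(inner, 0.0))

    F1, F2 = F(theta1), F(theta2)
    take1 = F1 >= F2                                        # S3.2.5
    return np.where(take1, F1, F2), np.where(take1, theta1, theta2)
```

On a sharp vertical colour edge the magnitude peaks at the edge columns and is zero in the flat regions, and the returned angle there points along the x direction.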
7. The fully automatic segmentation method for a complex-background leaf image according to claim 6, characterized in that step S3.3 comprises the following steps:
S3.3.1: if bestBackgroundDetectFlat is true, checking the value of every point in bestBackgroundDetect; if it is "1", resetting the value of the point at the same position in the gradient magnitude map VG to "0";
S3.3.2: in the gradient magnitude map VG, finding the threshold high delimiting the 1% of points with the largest pixel values and the threshold low delimiting the 1% of points with the smallest pixel values; resetting all points whose pixel value is greater than high to high, and all points whose pixel value is less than low to low;
S3.3.3: in the gradient angle map A, computing for every point the standard deviation over a disk of small-scale radius as its neighbourhood, obtaining the standard-deviation image stdOfA;
S3.3.4: resetting the pixel value α of every point on the gradient magnitude map VG to α = α/(stdOfA + 0.1).
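Outside the claim language, the enhancement of S3.3.1 to S3.3.4 can be sketched as below. Two simplifications are assumed: a square window stands in for the disk neighbourhood of S3.3.3, and the radius value 2 is illustrative only.

```python
import numpy as np

def enhance_veins(VG, A, radius=2, bg_marker=None):
    """Sketch of S3.3: suppress background, clip extremes, reward stable angles."""
    VG = VG.astype(float).copy()
    if bg_marker is not None:
        VG[bg_marker.astype(bool)] = 0.0          # S3.3.1: zero known background
    low, high = np.percentile(VG, [1, 99])        # S3.3.2: clip 1% tails
    VG = np.clip(VG, low, high)
    # S3.3.3: local standard deviation of the gradient angle
    h, w = A.shape
    std = np.zeros_like(VG)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            std[y, x] = A[y0:y1, x0:x1].std()     # square window, not a true disk
    return VG / (std + 0.1)                       # S3.3.4
```

Where the gradient direction is locally consistent (as along a vein), stdOfA is small and the magnitude is boosted by up to a factor of 10; in noisy regions the division damps it.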
8. The fully automatic segmentation method for a complex-background leaf image according to claim 7, characterized in that step S3.4 comprises the following steps:
S3.4.1: cropping, at the centre of the gradient magnitude map VG, a sub-image VGCrop whose length is (para × 2) times the length of VG and whose width accounts for the same proportion, the four borders of the sub-image being parallel to the four borders of VG respectively and the centre point of the sub-image coinciding with the centre point of VG;
S3.4.2: computing the maximum between-class variance segmentation threshold level of VGCrop;
S3.4.3: thresholding the gradient magnitude map VG with level as the threshold, obtaining OtsuBW;
S3.4.4: deleting too-small connected regions in OtsuBW;
S3.4.5: dilating OtsuBW.
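Outside the claim language, S3.4 can be sketched as follows: an Otsu threshold is learned on the central crop only (where the leaf is assumed to lie) and applied to the whole map. The values para=0.25 and min_area=4, and the quantization to 8-bit for the histogram, are illustrative assumptions.

```python
import numpy as np
from collections import deque

def otsu_threshold(values):
    """Maximum between-class variance threshold for 8-bit values (S3.4.2)."""
    hist = np.bincount(values.ravel(), minlength=256).astype(float)
    total, mean_all = hist.sum(), float(np.dot(np.arange(256), hist))
    best_t, best_var, cum, cum_mean = 0, -1.0, 0.0, 0.0
    for t in range(255):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total
        m0, m1 = cum_mean / cum, (mean_all - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_veins(VG, para=0.25, min_area=4):
    """Sketch of S3.4: central-crop Otsu, small-region removal, one-pixel dilation."""
    v = np.clip(VG, 0, 255).astype(np.uint8)
    h, w = v.shape
    ch, cw = int(h * para * 2), int(w * para * 2)       # S3.4.1: centre crop
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    level = otsu_threshold(v[y0:y0 + ch, x0:x0 + cw])   # S3.4.2
    bw = v > level                                      # S3.4.3
    seen = np.zeros_like(bw)                            # S3.4.4: drop tiny regions
    for sy, sx in zip(*np.nonzero(bw)):
        if not seen[sy, sx]:
            comp, q = [(sy, sx)], deque([(sy, sx)])
            seen[sy, sx] = True
            while q:
                y, x = q.popleft()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if 0 <= ny < h and 0 <= nx < w and bw[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        comp.append((ny, nx))
                        q.append((ny, nx))
            if len(comp) < min_area:
                for y, x in comp:
                    bw[y, x] = False
    out = bw.copy()                                     # S3.4.5: dilate by one pixel
    out[1:, :] |= bw[:-1, :]; out[:-1, :] |= bw[1:, :]
    out[:, 1:] |= bw[:, :-1]; out[:, :-1] |= bw[:, 1:]
    return out
```

Learning the threshold on the crop rather than the whole image keeps strong background clutter near the borders from distorting the vein threshold.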
9. The fully automatic segmentation method for a complex-background leaf image according to claim 8, characterized in that step S3.5 comprises the following steps:
S3.5.1: copying foregroundMask as the backup foregroundMaskBackup;
S3.5.2: computing foregroundMask OR OtsuBW, and storing the result as foregroundMask;
S3.5.3: on foregroundMask, selecting and retaining the region connected to the image centre point and deleting the other regions;
S3.5.4: copying foregroundMask as the backup foregroundMaskForDel;
S3.5.5: checking whether foregroundMask contains any point with value "1" that falls within the first nearBoundaryRowDistance rows, the last nearBoundaryRowDistance rows, the first nearBoundaryColDistance columns or the last nearBoundaryColDistance columns; if not, the newly merged foreground region is considered far from the image borders and most likely the main vein crossing the central part of the image, the purpose is achieved, and the process turns to S3.6;
S3.5.6: performing a small-scale erosion on foregroundMask, then selecting and retaining the region connected to the image centre point and deleting the other regions;
S3.5.7: further checking whether foregroundMask contains any point with value "1" that falls within the first veryNearBoundaryRowDistance rows, the last veryNearBoundaryRowDistance rows, the first veryNearBoundaryColDistance columns or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image borders, meaning the preceding erosion did not achieve the trimming purpose and another erosion is needed; if not, turning to S3.5.11;
S3.5.8: performing a small-scale erosion on foregroundMask, then selecting and retaining the region connected to the image centre point and deleting the other regions;
S3.5.9: further checking whether foregroundMask contains any point with value "1" that falls within the first veryNearBoundaryRowDistance rows, the last veryNearBoundaryRowDistance rows, the first veryNearBoundaryColDistance columns or the last veryNearBoundaryColDistance columns; if so, the foreground region is still very close to the image borders, meaning the preceding erosion operations again failed and special treatment is needed; if not, turning to S3.5.11;
S3.5.10: resetting to "0" the first dodgeBoundaryRowDistance rows, the last dodgeBoundaryRowDistance rows, the first dodgeBoundaryColDistance columns and the last dodgeBoundaryColDistance columns of foregroundMask;
S3.5.11: computing foregroundMask OR foregroundMaskBackup and storing the result as foregroundMask, completing the merging of the main vein;
step S3.6 comprises the following steps:
S3.6.1: in OtsuBW, deleting the foreground regions present in foregroundMaskForDel and saving the result as the fragmentary-vein candidate map candidates; the deletion is: for any point in OtsuBW, if the value of the point at the same position in foregroundMaskForDel is "1", resetting that point in OtsuBW to "0";
S3.6.2: scanning the whole of candidates, finding for each region its nearest distance DN from the image centre point; if DN is less than nearDistance, marking the region and recording the row number row and column number col of the point in the region nearest to the image centre point; at the same time judging whether the horizontal distance of each region from the four borders of the image is less than nearBoundaryRowDistance and whether its vertical distance is less than nearBoundaryColDistance; if so, marking the region as too close to the borders;
S3.6.3: copying candidates as the backup avoidRegions;
S3.6.4: deleting all regions in candidates that are marked as too close to the borders;
S3.6.5: for each region in candidates whose distance from the image centre point is less than nearDistance, drawing, with avoidance, a line segment from the point of the region nearest to the image centre point (whose row number and column number were found in S3.6.2) towards the image centre point; the avoidance rule is: let the coordinates of any point of the line segment be (x, y); if avoidRegions(x, y) is "1" and this point is not the starting point of the segment, cancelling the segment; in addition, if during the drawing foregroundMask(x, y) is detected to be "1", the region is considered connected to foregroundMask and the task is complete;
S3.6.6: computing foregroundMask OR candidates and saving the result as foregroundMask;
S3.6.7: selecting and retaining in foregroundMask the region connected to the image centre point.
10. The fully automatic segmentation method for a complex-background leaf image according to claim 9, characterized in that step S4 comprises the following steps:
S4.1: if bestBackgroundDetectFlat is true, first slightly dilating bestBackgroundDetect into bestBackgroundDetectFat, then deleting from foregroundMask the regions overlapping bestBackgroundDetectFat, retaining in foregroundMask the region connected to the image centre point and deleting the other regions, and finally setting backgroundMask equal to bestBackgroundDetect;
S4.2: using the result of backgroundMask OR foregroundMask as the markers, performing marker-controlled watershed segmentation to obtain outPutImage;
S4.3: taking the region labeled "2" in outPutImage as the foreground and the rest as the background, obtaining the final binary segmentation result logicImage.
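Outside the claim language, the marker-controlled watershed of S4.2 and S4.3 can be sketched as a Meyer-style flooding from the two marker sets, ordered by gradient magnitude. This is a minimal sketch: production implementations (for example MATLAB's watershed or scikit-image's segmentation.watershed) also maintain explicit watershed lines, which this version omits.

```python
import heapq
import numpy as np

def marker_watershed(gradient, markers):
    """Flood unlabeled pixels from the markers in order of gradient value.
    markers: 0 = unknown, 1 = background marker, 2 = foreground marker."""
    labels = markers.copy()
    h, w = gradient.shape
    heap, counter = [], 0          # counter breaks ties so the heap never compares arrays
    for y in range(h):
        for x in range(w):
            if labels[y, x] != 0:
                heapq.heappush(heap, (gradient[y, x], counter, y, x))
                counter += 1
    while heap:
        _, _, y, x = heapq.heappop(heap)
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]      # inherit the flooding label
                heapq.heappush(heap, (gradient[ny, nx], counter, ny, nx))
                counter += 1
    return labels == 2             # S4.3: the region labeled "2" is the leaf
```

Because low-gradient pixels are flooded first, the two labels meet along the high-gradient ridge (the leaf contour), which is exactly where the foreground/background boundary should fall.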
CN201910683687.6A 2019-07-26 2019-07-26 Full-automatic segmentation method for complex background leaf image Active CN110443811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910683687.6A CN110443811B (en) 2019-07-26 2019-07-26 Full-automatic segmentation method for complex background leaf image


Publications (2)

Publication Number Publication Date
CN110443811A true CN110443811A (en) 2019-11-12
CN110443811B CN110443811B (en) 2020-06-26

Family

ID=68431864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910683687.6A Active CN110443811B (en) 2019-07-26 2019-07-26 Full-automatic segmentation method for complex background leaf image

Country Status (1)

Country Link
CN (1) CN110443811B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022267300A1 (en) * 2021-06-25 2022-12-29 上海添音生物科技有限公司 Method and system for automatically extracting target area in image, and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473550A (en) * 2013-09-23 2013-12-25 广州中医药大学 Plant blade image segmentation method based on Lab space and local area dynamic threshold
CN104050670A (en) * 2014-06-24 2014-09-17 广州中医药大学 Complicated background blade image segmentation method combining simple interaction and mark watershed
US20140334692A1 (en) * 2012-01-23 2014-11-13 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Device and method for detecting a plant against a background
CN104598908A (en) * 2014-09-26 2015-05-06 浙江理工大学 Method for recognizing diseases of crop leaves
CN104850822A (en) * 2015-03-18 2015-08-19 浙江大学 Blade identification method based on multi-characteristic fusion simple background
CN106127735A (en) * 2016-06-14 2016-11-16 中国农业大学 A kind of facilities vegetable edge clear class blade face scab dividing method and device
CN106296662A (en) * 2016-07-28 2017-01-04 北京农业信息技术研究中心 Maize leaf image partition method and device under field conditions
CN106683098A (en) * 2016-11-15 2017-05-17 北京农业信息技术研究中心 Segmentation method of overlapping leaf images
CN106910197A (en) * 2017-01-13 2017-06-30 广州中医药大学 A kind of dividing method of the complex background leaf image in single goal region
CN108564589A (en) * 2018-03-26 2018-09-21 江苏大学 A kind of plant leaf blade dividing method based on the full convolutional neural networks of improvement
CN109359653A (en) * 2018-09-12 2019-02-19 中国农业科学院农业信息研究所 A kind of cotton leaf portion adhesion scab image partition method and system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIWEN GAO et al.: "A neural network classifier based on prior evolution and iterative approximation used for leaf recognition", 2010 Sixth International Conference on Natural Computation *
P. DIVYA et al.: "Segmentation of Defected Regions in Leaves using K-Means and OTSU's Method", 2018 4th International Conference on Electrical Energy Systems (ICEES) *
WANG JING: "Research on automatic identification of tobacco leaf diseases based on image processing technology", China Master's Theses Full-text Database, Agricultural Science and Technology *
GAO PAN et al.: "Segmentation method for cotton leaves against the complex background of cotton fields", Xinjiang Agricultural Sciences *


Also Published As

Publication number Publication date
CN110443811B (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN106951836B Crop coverage extraction method based on a prior-threshold-optimized convolutional neural network
CN107016405B Pest image classification method based on classification-prediction convolutional neural networks
CN106248559B Five-class leukocyte classification method based on deep learning
Gao et al. Fully automatic segmentation method for medicinal plant leaf images in complex background
CN111985536B Gastroscopic pathology image classification method based on weakly supervised learning
CN108364280A Method and apparatus for automatic description and accurate width measurement of structural cracks
CN107644418B Optic disc detection method and system based on convolutional neural networks
CN109284733A Shopping-guide negligence monitoring method based on YOLO and multi-task convolutional neural networks
CN104978567B Vehicle detection method based on scene classification
CN108388905B Illuminant estimation method based on convolutional neural networks and neighbourhood context
CN107506770A Method for generating standard fundus-photography images for diabetic retinopathy
CN107341523A Express waybill information recognition method and system based on deep learning
CN104881865A Forest pest and disease monitoring and early-warning method and system based on UAV image analysis
CN110569747A Method for rapidly counting rice ears in paddy fields using an image pyramid and fast-RCNN
CN103413120A Tracking method based on integral and partial recognition of an object
WO2020125057A1 Livestock quantity identification method and apparatus
CN104166983A Real-time moving-object extraction method using an improved ViBe algorithm combined with graph cut
Gao et al. A method for accurately segmenting images of medicinal plant leaves with complex backgrounds
CN105139383A Medical image segmentation method based on a defined-circle HSV colour space and cancer cell identification method
CN110223349A Autonomous picking-point localization method
CN108073918A Method for extracting arteriovenous crossing compression features of retinal fundus vessels
CN106845513A Hand detector and method based on conditional random forests
CN109409377A Method and device for detecting text in images
CN110532399A Knowledge-graph updating method, system and device for an object game question-answering system
CN108805210A Bullet-hole recognition method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant