CN107527351A - A lactating sow image segmentation method fusing FCN and threshold segmentation - Google Patents

A lactating sow image segmentation method fusing FCN and threshold segmentation

Info

Publication number
CN107527351A
CN107527351A (application CN201710772176.2A)
Authority
CN
China
Prior art keywords
sow
fcn
segmentation
image
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710772176.2A
Other languages
Chinese (zh)
Other versions
CN107527351B (en)
Inventor
薛月菊 (Xue Yueju)
杨阿庆 (Yang Aqing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN201710772176.2A priority Critical patent/CN107527351B/en
Publication of CN107527351A publication Critical patent/CN107527351A/en
Application granted granted Critical
Publication of CN107527351B publication Critical patent/CN107527351B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lactating sow image segmentation method fusing a fully convolutional network (FCN) with threshold segmentation. Video images of sows are collected to build a sow segmentation video image database. An FCN sow segmentation model is built and used to segment test images, yielding the FCN sow segmentation result. A minimum-area bounding rectangle is fitted around the FCN result, and Otsu threshold segmentation is applied to the grayscale map and the H component within that region, yielding the threshold segmentation result. The FCN and threshold segmentation results are then fused to obtain the final sow segmentation. By fusing multi-channel Otsu thresholding on top of the FCN, the present invention effectively fills in missing local regions without degrading the FCN segmentation, improving segmentation accuracy.

Description

A lactating sow image segmentation method fusing FCN and threshold segmentation
Technical field
The present invention relates to the technical field of image segmentation, and more particularly to a lactating sow image segmentation method that fuses a fully convolutional network (FCN) with multi-channel threshold segmentation.
Background technology
The health and maternal behavior of lactating sows bear directly on the economic performance of the whole pig farm, so monitoring lactating sow behavior is particularly important. Traditionally, sow condition has been monitored by prolonged manual observation of daily behaviors such as movement, feeding and nursing, with the observer judging the quality of sow health and maternal behavior from experience before taking further measures. This approach is time-consuming and laborious, and easily leads to misjudgment. Automatic monitoring of sow behavior using computer vision technology is therefore a preferable alternative to manual observation.
The first step in automatic recognition of lactating sow behavior is to segment the sow completely from a complex background image using computer vision, i.e., sow image segmentation. Sow image segmentation is a complex and challenging problem with two main sources of difficulty. The first arises from changes in the sow's body: sows exhibit considerable variation in detail, such as different behavioral postures (sitting, lying on one side) and body twisting. The second arises from external environmental factors: the complexity of the pig-house environment, such as lighting changes, occlusion between pigs, and low color contrast between the pigs and the background. These problems pose great challenges to sow image segmentation.
In recent years, several researchers have used computer vision to extract pig foregrounds. In 2014, Zhu Weixing et al. of Nanjing Agricultural University updated a background model with a Gaussian mixture and combined it with maximum-entropy thresholding, but this method is unsuitable for foregrounds that remain still for long periods or move slowly. In 2015 the same team applied a secondary maximum-entropy threshold segmentation to group-housed piglets to obtain pig foregrounds, but the method performs poorly when the foreground differs little from the background. Patent publication CN106204537A discloses a live-pig image segmentation method for complex scenes: background differencing and thresholding produce an initial segmentation, after which the centroid and image light-source information are used for shadow compensation to obtain the segmented pig image. In 2017, Gao Yun et al. of Huazhong Agricultural University built on threshold segmentation with a distance-transform-based watershed algorithm to separate touching pigs within a herd. However, most of these studies target uniformly colored live pigs, replacement gilts or piglets, and rarely address lactating sows with uneven coat patterns. Moreover, none of the above pig segmentation methods involves convolutional-network-based pig extraction.
Convolutional neural network technology is widely applied in industry. In 2015, Long et al. proposed the fully convolutional network (FCN, Fully Convolutional Networks) algorithm for semantic image segmentation, which modifies existing neural network models to obtain end-to-end pixel-level prediction, reduces training complexity, and accurately extracts deep semantic information from images. Through multi-layer convolution and pooling, the FCN largely avoids problems such as uneven illumination, random noise and image distortion, and has achieved great breakthroughs in the image segmentation field; yet research on applying it to segmenting livestock and poultry remains nearly blank. Because the FCN upsamples with simple bilinear interpolation, it easily loses local image information and produces holes. In addition, when samples are insufficient or homogeneous, the FCN generalizes poorly and tends to under-segment. Multi-channel threshold segmentation exploits the self-similarity of the object itself to extract the target, compensating to some extent for the FCN's holes and under-segmentation. The present invention extracts an ROI on the basis of the FCN segmentation and fuses multi-channel threshold segmentation within that region, which effectively fills in missing local regions without degrading the FCN result, improving the FCN segmentation and strengthening generalization.
Summary of the invention
It is an object of the present invention to overcome the technical problems identified in the background above by providing a lactating sow image segmentation method fusing an FCN with multi-channel threshold segmentation, capable of accurately segmenting individual sows under complex conditions such as sow deformation, occlusion, uneven coat pattern, low color contrast with the background, and lighting changes.
To achieve these goals, the technical scheme of the present invention is as follows:
A lactating sow image segmentation method fusing an FCN with multi-channel threshold segmentation comprises the following steps:
S1. Collect video images of sows and build a sow segmentation video image database;
S2. Build an FCN sow segmentation model and segment test images with it to obtain the FCN sow segmentation result;
S3. Fit a minimum-area bounding rectangle around the FCN segmentation result, and apply Otsu threshold segmentation to the grayscale map and H component of that region to obtain the threshold segmentation result;
S4. Fuse the FCN and threshold segmentation results to obtain the final sow segmentation.
By extracting high-quality image features through convolution and pooling, the FCN copes well with pig-house conditions such as illumination change, uneven sow coat color and sow-piglet adhesion. Under strong light, however, and because the FCN itself easily loses detailed information during convolution, local sow regions may be missing. This method therefore fuses multi-channel Otsu thresholding on top of the FCN, which effectively fills in missing local regions without degrading the FCN segmentation, improving segmentation accuracy.
Preferably, step S1 proceeds as follows:
S11. Data acquisition: collect overhead sow video images in real time;
S12. Database construction: reject video frames with more than half of the sow body missing or with motion blur, and build the training, validation and test sets;
S13. Manual annotation: label every pixel of the sow in each image by hand.
Preferably, step S2 proceeds as follows:
S21. Design the FCN sow segmentation model structure;
S211. Select a convolutional neural network;
S212. Remove its classification layer;
S213. Design convolution kernels the same size as the input to each fully connected layer and convolve them with that input, thereby converting all fully connected layers of the network into convolutional layers;
S214. Add a convolutional layer after the topmost pooling layer n, performing a 1 × 1 convolution with output dimension equal to the number of classes, to obtain the prediction score(n) for pooling layer n; deconvolve this result to obtain the deconvolution prediction score_up(n) for pooling layer n;
S215. Apply a 1 × 1 convolution with output dimension equal to the number of classes to the preceding pooling layer n-1 to obtain its prediction score(n-1);
S216. Add score(n-1) and score_up(n), then deconvolve to obtain the deconvolution prediction score_up(n-1) for pooling layer n-1;
S217. Apply a 1 × 1 convolution with output dimension equal to the number of classes to the preceding pooling layer n-2 to obtain its prediction score(n-2);
S218. Add score(n-2) and score_up(n-1), then deconvolve to obtain the deconvolution prediction score_up(n-2) for pooling layer n-2;
S219. Finally, add a Loss layer for loss computation;
S22. Train the FCN sow segmentation model;
S221. Apply histogram equalization to the training set;
S222. Train the segmentation model on the training set, using a classification convolutional network trained on ImageNet as the pre-training model; fine-tuning the sow segmentation network in this way accelerates convergence and prevents over-fitting. During the first forward pass, if a layer shares its name and structure with a layer of the pre-training model, that layer's parameters are loaded directly; otherwise the layer is initialized with a random Gaussian distribution. Data propagate to the last layer, the loss is computed against the sow's ground-truth annotation, and stochastic gradient descent continuously optimizes the network parameters; this supervised learning on the sow images yields the optimal connection weights and biases of the fully convolutional network;
S23. Segment the test-set images with the FCN sow segmentation model;
S231. Apply histogram equalization to the test image;
S232. Segment the preprocessed test image with the trained FCN model to extract the sow region;
S233. Post-process the FCN segmentation, filling holes and removing small regions by morphology and an area threshold.
Preferably, step S3 proceeds as follows:
S31. Fit the minimum-area bounding rectangle around the FCN sow segmentation result and extract the corresponding rectangular region of the original image;
S32. Transform the extracted rectangular region into grayscale space and HSV space. From M sow images under different lighting and from different pens, compute the average gray value of the piglets as the gray threshold, so as to exclude piglet regions; compute the hue threshold of the H component with the Otsu method on the H component. Sow pixels are then extracted according to the following rule, where S_TH(i, j) denotes the segmented binary map, H(i, j) the H component, G(i, j) the grayscale image, th_h the hue threshold and th_g the gray threshold:
S_TH(i, j) = 1 if H(i, j) > th_h and G(i, j) > th_g, and S_TH(i, j) = 0 otherwise.
That is, a pixel whose H-component tone value exceeds the hue threshold and whose gray value on the grayscale image exceeds the gray threshold is labeled 1, and every other pixel is labeled 0; the threshold segmentation result is then post-processed.
Preferably, the fusion of step S4 merges the FCN and threshold segmentation results according to the following rule, where S_FCN(i, j) is the FCN segmentation result and S_TH(i, j) is the multi-channel threshold segmentation result; the fused map is post-processed to obtain the final segmentation I(i, j):
I(i, j) = 1 if S_FCN(i, j) = 1 or S_TH(i, j) = 1, and I(i, j) = 0 otherwise.
Preferably, the deconvolution refers to upsampling the output data, the upsampling being realized by bilinear interpolation.
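As an illustration of the bilinear-interpolation upsampling just described, the following is a minimal numpy sketch for integer factors. It is not the Caffe deconvolution layer of the embodiment (which learns/initializes a bilinear kernel); the function name and half-pixel sampling convention are assumptions for illustration only:

```python
import numpy as np

def bilinear_upsample(x, factor):
    """Upsample a 2-D score map by an integer factor with bilinear interpolation."""
    h, w = x.shape
    # Half-pixel-centered source coordinates for each output pixel.
    rows = (np.arange(h * factor) + 0.5) / factor - 0.5
    cols = (np.arange(w * factor) + 0.5) / factor - 0.5
    r0 = np.clip(np.floor(rows).astype(int), 0, h - 1)
    r1 = np.clip(r0 + 1, 0, h - 1)
    c0 = np.clip(np.floor(cols).astype(int), 0, w - 1)
    c1 = np.clip(c0 + 1, 0, w - 1)
    fr = np.clip(rows - r0, 0.0, 1.0)[:, None]   # vertical interpolation weights
    fc = np.clip(cols - c0, 0.0, 1.0)[None, :]   # horizontal interpolation weights
    top = x[np.ix_(r0, c0)] * (1 - fc) + x[np.ix_(r0, c1)] * fc
    bot = x[np.ix_(r1, c0)] * (1 - fc) + x[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bot * fr
```

Upsampling a 2 × 2 map by 2 produces a 4 × 4 map whose interior values are linear blends of the four neighbors, which is the behavior the FCN relies on between score maps.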
Preferably, the prediction result of step S2 is a two-dimensional map in which the value at each coordinate represents the probability of each class.
Preferably, the training set is the data set used to train the segmentation model; the validation set is used during training to tune the network structure parameters and select the optimal network model; the test set is used to test the model's performance and carry out performance evaluation.
Compared with the prior art, the technical scheme of the present invention has the following beneficial effects:
(1) The present invention builds a lactating sow video image database covering the daily life of sows in the pig-house scene: overhead images of various behavioral postures, differing in lighting, background and scale. The database provides data support for later sow behavior analysis, algorithm design and related work;
(2) Based on a convolutional network, the present invention trains the segmentation model on part of the sow video images and uses the remaining sow images as the validation set, improving the generalization of the network and overcoming the difficulty of sow segmentation in complex environments, e.g. under lighting changes, piglet occlusion, patterned sow coats and small color differences from the surroundings;
(3) On the basis of the FCN segmentation, the present invention patches the FCN result by fusing multi-channel threshold segmentation. Especially under relatively strong light this markedly improves the sow segmentation result, further raising the accuracy and generalization ability of the segmentation and providing more precise information for further sow behavior analysis;
(4) The invention is suitable for continuous long-term sow monitoring and facilitates further automated detection and recognition of sow behavior.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is the FCN network structure of the present invention.
Fig. 3(a) is a collected sow video image; Fig. 3(b) is the preprocessed image; Fig. 3(c) is the FCN segmentation result; Fig. 3(d) is the bounding rectangle on the FCN segmentation result; Fig. 3(e) is the rectangular region of the grayscale image; Fig. 3(f) is the rectangular region of the H component; Fig. 3(g) is the segmentation result from the grayscale map and H component; Fig. 3(h) is the binary segmentation map after fusing the fully convolutional network and multi-channel threshold segmentation; Fig. 3(i) is the extracted sow body region.
Fig. 4 compares the segmentation results.
Embodiment
The accompanying drawings are for illustration only and shall not be construed as limiting this patent. To better illustrate the present embodiment, some parts of the drawings are omitted, enlarged or reduced, and do not represent the size of the actual product.
It will be appreciated by those skilled in the art that some known structures and their description may be omitted from the drawings. The technical scheme of the present invention is further described below with reference to the drawings and embodiments.
The invention provides a lactating sow image segmentation method fusing an FCN with multi-channel threshold segmentation. The method realizes sow object extraction in the pig-house scene and provides a basic guarantee for further processing and intelligent analysis of maternal behavior.
Part 1 of Fig. 1 is database construction, including data acquisition, experimental data screening and data set annotation, providing data support for the subsequent experiments. Part 2 designs the FCN sow image segmentation model: the optimal segmentation model is first trained on the training set, and the test-set images are then segmented with this model. In part 3, on the basis of the initial FCN segmentation, adaptive threshold segmentation is applied to the H component and the grayscale image, yielding the threshold segmentation result. Part 4 fuses the results of the first two methods to obtain the final segmentation. The method was implemented under the Ubuntu 14.04 operating system on a GPU hardware platform based on an Nvidia GTX 980: the Caffe deep-learning framework was built for FCN model training and sow image segmentation testing, while the multi-channel threshold segmentation and fusion were completed under Matlab.
The implementation is as follows:
Step 1: video image acquisition and database construction;
Step 2: design and train the FCN sow image segmentation model, and perform the initial segmentation with it;
Step 3: segment the sow image with the multi-channel threshold segmentation method;
Step 4: fuse the segmentation results of steps 2 and 3 to obtain the final segmentation.
The database construction of step 1 specifically includes:
1) A camera is fixed on site directly above the pen and adjusted to a suitable height so as to capture the complete pen area. It is connected by USB to a computer image-acquisition system, which obtains overhead sow video images in real time and saves them to the local hard disk. In total, overhead color video images of 28 penned sows were collected at an image size of 960 × 540 pixels; a collected sow video image is shown in Fig. 3(a).
2) The training, validation and test sets are established. From the video images of 7 sows, 3811 images of different postures (standing, sitting, lying on one side) and different times (from 8 a.m. to 6 p.m.) were extracted as training samples; in the same manner, 672 images extracted from the videos of the remaining 21 sows served as the validation set, and 1366 images extracted from the videos of all 28 sows served as the test set, none of which repeats an image in the validation or training set. The training, validation and test sets were annotated manually, marking every sow pixel by hand. Background pixels have value 0, sow region pixels value 1, and sow edge pixels value 255, indicating that edge pixels are excluded from the loss computation.
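The edge-label convention above (255 meaning "no loss computed") can be sketched as a masked loss in numpy. This is an illustrative sketch only, assuming the per-pixel squared error used later in this embodiment; the function name is hypothetical:

```python
import numpy as np

def masked_squared_loss(pred, label, ignore_value=255):
    """Squared-error loss over labeled pixels only.

    Pixels carrying the edge value (255 in the annotation scheme above)
    contribute nothing to the loss.
    """
    mask = label != ignore_value
    diff = pred[mask] - label[mask].astype(float)
    return float(np.sum(diff ** 2))
```

A pixel labeled 255 is simply dropped from the sum, so annotation uncertainty at the sow boundary does not penalize the network.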
Step 2 builds the FCN sow image segmentation model and performs the initial segmentation with it, specifically:
1) The network is based on the VGG16 structure, modified to adapt the FCN to the pixel-level prediction task; the segmentation network of the present invention is shown in Fig. 2. The concrete operations are as follows:
(a) Remove the VGG16 classification layer;
(b) Replace the fully connected layers F6 and F7 of VGG16 with convolutional layers of dimension 4096 and kernel sizes 7 × 7 and 1 × 1 respectively (C6 and C7 in Fig. 2), and add a convolutional layer C8 with a 1 × 1 kernel and output dimension 2 (sow object and background); the output is score(5), each pixel value being the class probability of that pixel;
(c) Add a deconvolution layer D1 after C8 with a 2 × 2 deconvolution kernel, i.e. 2× upsampling, to obtain the deconvolution prediction score_up(5) of C8;
(d) Add a convolutional layer C9 with a 1 × 1 kernel and output dimension 2, convolved with pooling layer P4; the output is score(4);
(e) Add a fusion layer F1 merging the outputs of D1 and C9, i.e. adding score(4) and score_up(5);
(f) Add a deconvolution layer D2 performing 2× upsampling of F1's output; the output is score_up(4);
(g) Add a convolutional layer C10 with a 1 × 1 kernel and output dimension 2, convolved with pooling layer P3; the output is score(3);
(h) Add a fusion layer F2 adding score(3) and score_up(4);
(i) Add a deconvolution layer D3 with an 8 × 8 kernel, i.e. 8× upsampling of F2's fusion result; the output score_up(3) is finally cropped to the size of the input image and forms the predicted segmentation;
(j) Add a Loss layer that compares the prediction with the manually labeled image to compute the loss.
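The skip architecture of (c) to (i) can be sketched with numpy, assuming the score maps have already been computed and substituting nearest-neighbor upsampling for the learned bilinear deconvolution layers (a deliberate simplification to keep the sketch dependency-free; this is an illustration, not the Caffe implementation of the embodiment):

```python
import numpy as np

def upsample(score, factor):
    """Stand-in for the deconvolution layers D1-D3: integer-factor
    nearest-neighbor upsampling of a (classes, h, w) map."""
    return np.kron(score, np.ones((1, factor, factor)))

def fcn_skip_fusion(score5, score4, score3):
    """score5/score4/score3: (classes, h, w) score maps from P5, P4, P3,
    where score4 has twice and score3 four times the resolution of score5."""
    score_up5 = upsample(score5, 2)   # D1: 2x upsampling of score(5)
    fused1 = score4 + score_up5      # F1: score(4) + score_up(5)
    score_up4 = upsample(fused1, 2)  # D2: 2x upsampling
    fused2 = score3 + score_up4      # F2: score(3) + score_up(4)
    return upsample(fused2, 8)       # D3: 8x upsampling to input resolution
```

With VGG16's stride-32 topmost pooling, a 1 × 1 score(5) cell expands through the three stages back to a 32 × 32 patch at input resolution, which is why the final map matches the image size before cropping.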
2) Histogram equalization is applied to the training set to reduce the influence of uneven lighting.
3) The preprocessed training set is fed into the segmentation network to learn the optimal network parameters. The training process is implemented as follows:
VGG16.caffemodel, a model trained on the ImageNet database, serves as the pre-training model. During the first forward pass, the parameters of unchanged layers are loaded directly from the pre-training model, while the other layers are initialized with a random Gaussian distribution. Data propagate to the last layer, and the loss is computed against the sow's ground-truth annotation as the sum of squared errors, c = Σᵢ ‖zᵢ − tᵢ‖², where c denotes the squared-error cost, m the number of samples fed to the network at a time (set to 1 in this embodiment owing to GPU memory limits), tᵢ the correct classification (annotation image) of the i-th image, and zᵢ the detection result output for the i-th image after the network computation.
After the loss has been computed, the error is back-propagated to optimize the network parameters. During back-propagation, the convolution kernels W and biases b are adjusted according to W_new = W_old − η₁ · ∂c/∂W and b_new = b_old − η₂ · ∂c/∂b, where η₁ and η₂ are learning rates and W_old, W_new denote the weight parameters before and after the update.
With η₁ = 10⁻¹² and η₂ = 2 × 10⁻¹², the error loss is computed and the parameters are updated once per iteration. When the validation-set error stops decreasing and begins to grow, the whole network is judged to have started over-fitting; training is terminated and the model saved as the optimal model. Training ran 30,000 iterations in total, lasting 5 h 37 min 53 s, with the cost function converging to 0.02.
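The parameter update above can be sketched as one plain SGD step (an illustrative sketch with the embodiment's learning rates; the gradients are assumed given, as Caffe computes them by back-propagation):

```python
import numpy as np

def sgd_step(w, b, grad_w, grad_b, lr_w=1e-12, lr_b=2e-12):
    """One stochastic-gradient-descent update with separate learning
    rates for kernels (eta_1) and biases (eta_2), per the text above."""
    return w - lr_w * grad_w, b - lr_b * grad_b
```

With such small learning rates, a visible parameter change requires gradients of comparable magnitude, which is consistent with an unnormalized squared-error loss summed over all pixels.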
4) Histogram equalization is applied to the test set; the preprocessed result is shown in Fig. 3(b).
5) The preprocessed test images are fed into the trained FCN model to segment the sow image; segmentation performs only a forward pass, with no backward feedback. The prediction is then post-processed with one morphological closing using a 5 × 5 "disk" structuring element, after which connected components with area below the threshold of 30,000 are removed, giving the final FCN segmentation S_FCN, shown in Fig. 3(c).
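The post-processing step above (closing, then small-component removal) can be sketched with scipy.ndimage. This is a sketch under stated assumptions: a square structuring element stands in for the 5 × 5 "disk", and the function name is hypothetical:

```python
import numpy as np
from scipy import ndimage

def postprocess(mask, selem_size=5, min_area=30000):
    """Morphological closing followed by removal of connected components
    smaller than min_area, mirroring the FCN post-processing step."""
    closed = ndimage.binary_closing(
        mask.astype(bool), structure=np.ones((selem_size, selem_size)))
    labels, n = ndimage.label(closed)
    out = np.zeros_like(closed)
    for k in range(1, n + 1):
        comp = labels == k
        if comp.sum() >= min_area:
            out |= comp          # keep only sufficiently large regions
    return out.astype(np.uint8)
```

On a full-resolution 960 × 540 frame, the 30,000-pixel threshold keeps the sow body while discarding piglet-sized or noise components; on small test arrays a lower threshold must be passed in.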
The multi-channel threshold segmentation of the sow image in step 3 is implemented as follows:
1) The minimum-area bounding rectangle is fitted around the final FCN segmentation S_FCN, as in Fig. 3(d), and the corresponding rectangular region of the original image is extracted;
2) The extracted rectangular region is transformed into grayscale space and HSV space. From 200 validation-set sow images chosen under different lighting and from different pens, the average piglet gray value of 43 is taken as the gray threshold th_g; the hue threshold th_h of the H component is computed by the Otsu method. Target pixels are then extracted according to the rule given earlier: a pixel whose H-component tone value (Fig. 3(f)) exceeds the hue threshold th_h and whose gray value on the grayscale image G (Fig. 3(e)) exceeds the gray threshold th_g is labeled 1, and the remaining pixels are labeled 0, giving the initial threshold segmentation S_TH, as in Fig. 3(g);
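The multi-channel rule above can be sketched in numpy: Otsu's criterion (maximizing between-class variance) selects the hue threshold, while the gray threshold is fixed from piglet statistics (43 in this embodiment). An illustrative sketch only; the embodiment used Matlab, and the function names are assumptions:

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Otsu's method: return the threshold maximizing between-class variance."""
    hist, _ = np.histogram(channel.ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(bins))   # class-0 cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # undefined where a class is empty
    return int(np.argmax(sigma_b))

def multichannel_threshold(gray, hue, th_g=43):
    """Label a pixel 1 when hue > Otsu hue threshold AND gray > gray threshold."""
    th_h = otsu_threshold(hue)
    return ((hue > th_h) & (gray > th_g)).astype(np.uint8)
```

Requiring both channels to pass is what excludes the lighter piglet regions that a single gray threshold would admit.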
The fusion of the FCN and threshold segmentation results in step 4 is implemented as follows:
1) All target pixels of S_FCN and S_TH are labeled 1 and all other pixels 0, i.e. I(i, j) = 1 if S_FCN(i, j) = 1 or S_TH(i, j) = 1, and 0 otherwise; holes and noise remain in the fused result I.
2) The fused result is closed once with a 5 × 5 "disk" morphological structuring element to fill holes, and connected components with area below the threshold of 30,000 are again removed, giving the final segmentation map Bw, as shown in Fig. 3(h); Fig. 3(i) is the extracted RGB sow object.
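Steps 3 and 4 together amount to an OR-fusion of the two masks inside the FCN bounding box. A minimal numpy sketch, using an axis-aligned box as a stand-in for the minimum-area rectangle of the text (names hypothetical):

```python
import numpy as np

def bounding_box(mask):
    """Minimum axis-aligned box around the nonzero pixels of the FCN mask."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1

def fuse(s_fcn, s_th_roi, box):
    """Pixel-wise OR: the threshold mask is defined only inside the box."""
    r0, r1, c0, c1 = box
    out = s_fcn.copy()
    out[r0:r1, c0:c1] = np.maximum(out[r0:r1, c0:c1], s_th_roi)
    return out
```

A hole inside the FCN mask that the threshold mask covers is filled by the OR, which is exactly the "filling in missing local regions" effect claimed for the fusion.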
The experimental results are described in detail below:
The present invention uses 4 evaluation indexes generally recognized by the industry to assess the segmentation of the test set. Pixel accuracy (pixel acc), mean class accuracy (mean acc), mean region overlap (mean IU, intersection over union) and frequency-weighted region overlap (f.w. IU) are computed with the formulas below, where n_ij denotes the number of pixels of class i predicted as class j, n_cl the total number of semantic classes, and t_i = Σ_j n_ij the total number of class-i pixels:
pixel acc = Σ_i n_ii / Σ_i t_i
mean acc = (1/n_cl) Σ_i n_ii / t_i
mean IU = (1/n_cl) Σ_i n_ii / (t_i + Σ_j n_ji − n_ii)
f.w. IU = (Σ_k t_k)⁻¹ Σ_i t_i · n_ii / (t_i + Σ_j n_ji − n_ii)
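The four indexes above follow directly from the confusion matrix and can be sketched in a few lines of numpy (an illustrative sketch; the function name is an assumption):

```python
import numpy as np

def segmentation_metrics(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j
    (the n_ij of the formulas above); returns the four evaluation indexes."""
    conf = conf.astype(float)
    t = conf.sum(axis=1)                  # t_i = sum_j n_ij
    diag = np.diag(conf)                  # n_ii
    union = t + conf.sum(axis=0) - diag   # t_i + sum_j n_ji - n_ii
    pixel_acc = diag.sum() / t.sum()
    mean_acc = np.mean(diag / t)
    mean_iu = np.mean(diag / union)
    fw_iu = (t * diag / union).sum() / t.sum()
    return pixel_acc, mean_acc, mean_iu, fw_iu
```

For the two-class sow/background case, n_cl = 2 and the confusion matrix is 2 × 2.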
The segmentation performance on the test set (1366 images of 28 penned sows) was computed with the above formulas; the statistics are shown in Table 1. In Table 1, the FCN method is Long et al.'s improved skip-architecture fully convolutional network (FCN-8s) on the VGG16 network; threshold segmentation means thresholding the grayscale image and H component within the bounding rectangle of the FCN segmentation result; fusion refers to the method proposed by the present invention, which merges the two segmentation results.
The comparison of the FCN, threshold segmentation and the proposed fusion is shown in Fig. 4, where threshold segmentation again refers to multi-channel thresholding within the ROI extracted from the FCN segmentation result, which performs far better than thresholding the entire image directly. Five groups of images are shown, from 5 different pens and distinct from the herds selected for the training set. As Fig. 4 shows, under the targeted pig-house scene the FCN and threshold segmentation are complementary, and combining them brings an obvious improvement; even in the partial-segmentation case of Fig. 4(e), the segmentation of the present method still improves on the FCN.
Table 1. Segmentation results
Obviously, the above embodiment of the present invention is merely an example for clearly illustrating the present invention and is not a limitation on its embodiments. For those of ordinary skill in the art, other changes in different forms may be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments. Any modification, equivalent substitution and improvement made within the spirit and principle of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (8)

1. A lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation, characterized by comprising the following steps:
S1, acquiring video images of sows, and building a sow segmentation video image library;
S2, building an FCN sow segmentation model and segmenting the test images with the model to obtain the FCN sow image segmentation result;
S3, fitting a minimum-area circumscribed rectangle to the FCN segmentation result, and performing Otsu thresholding on the grayscale image and the H component of that region, thereby obtaining the threshold segmentation result;
S4, fusing the FCN segmentation result and the threshold segmentation result to obtain the final segmentation result of the sow image.
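The rectangle extraction underlying step S3 can be sketched as follows. This is an illustrative sketch, not part of the claims: an axis-aligned bounding box is used as a simplification of the claimed minimum-area circumscribed rectangle, and NumPy is an assumed dependency.

```python
import numpy as np

def fcn_bounding_rect(fcn_mask):
    """Axis-aligned bounding rectangle of a binary FCN segmentation mask.

    The claim calls for a minimum-area circumscribed rectangle; an
    axis-aligned box is used here as a simplification. Returns
    (row0, row1, col0, col1) as half-open index ranges.
    """
    ys, xs = np.nonzero(fcn_mask)
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1

# toy mask: sow pixels in rows 2..4, columns 3..6
mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1
print(fcn_bounding_rect(mask))  # (2, 5, 3, 7)
```

The returned ranges can then be used to slice the original image and restrict the thresholding of step S3 to the region around the sow.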
2. The lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation according to claim 1, characterized in that the detailed process of step S1 is as follows:
S11, data acquisition: capturing top-view sow video images in real time;
S12, database construction: rejecting video frames in which more than 1/2 of the sow body is missing or which are motion-blurred, and building the training set, validation set and test set;
S13, manual annotation of sow images: manually labelling all pixels belonging to the sow in each image.
3. The lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation according to claim 1 or 2, characterized in that the detailed process of step S2 is as follows:
S21, designing the FCN sow segmentation model structure:
S211, selecting a convolutional neural network;
S212, removing the classification layer of the convolutional neural network;
S213, designing a convolution kernel of the same size as the input data of the fully connected layer and convolving that layer's input data with it, thereby converting all fully connected layers of the convolutional neural network into convolutional layers;
S214, adding after the topmost pooling layer n a convolutional layer that performs 1×1 convolution with output dimension equal to the number of classes, giving the prediction result score(n) of pooling layer n, and deconvolving this result to obtain the deconvolution prediction result score_up(n) of pooling layer n;
S215, performing 1×1 convolution, with output dimension equal to the number of classes, on pooling layer n-1, the layer above pooling layer n, to obtain its prediction result score(n-1);
S216, adding the two results score(n-1) and score_up(n), and deconvolving the sum to obtain the deconvolution prediction result score_up(n-1) of pooling layer n-1;
S217, performing 1×1 convolution, with output dimension equal to the number of classes, on pooling layer n-2, the layer above pooling layer n-1, to obtain its prediction result score(n-2);
S218, adding the two results score(n-2) and score_up(n-1), and deconvolving the sum to obtain the deconvolution prediction result score_up(n-2) of pooling layer n-2;
S219, finally adding a Loss layer for loss computation;
S22, training the FCN sow segmentation model:
S221, applying histogram equalization to the training set;
S222, training the segmentation model on the training set, using the classification convolutional neural network model trained on ImageNet as the pre-training model; fine-tuning the sow segmentation network in this way accelerates convergence and prevents over-fitting. During the first forward propagation, if a layer of the segmentation network has the same name as a layer of the pre-training model, the parameters of that layer of the pre-training model are used directly; otherwise the layer parameters are initialized with a random Gaussian distribution. The data are propagated to the last layer, the loss is computed against the ground-truth sow annotation, and stochastic gradient descent is used to continually optimize the network parameters, performing supervised learning on the sow images to obtain the optimal connection weights and biases of the fully convolutional network;
S23, segmenting the test set images with the FCN sow segmentation model:
S231, applying histogram equalization to the test images;
S232, segmenting the preprocessed test images with the trained FCN model to extract the sow region;
S233, post-processing the FCN segmentation result, filling holes and removing small regions by morphology and an area threshold.
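The score-fusion chain of steps S214–S218 can be sketched on toy per-class score maps. This is a shape-level sketch only: nearest-neighbour upsampling stands in for the bilinear deconvolution of claim 6, a single class is assumed, and the final deconvolution back to input resolution is omitted.

```python
import numpy as np

def upsample2x(score):
    """2x nearest-neighbour upsampling; the patent uses bilinear
    deconvolution (claim 6), simplified here for illustration."""
    return score.repeat(2, axis=0).repeat(2, axis=1)

def skip_fuse(score_n, score_n1, score_n2):
    """Skip fusion of per-class score maps from pooling layers n, n-1
    and n-2, following steps S214-S218; each map is twice the size of
    the previous one. A final deconvolution to input resolution (not
    shown) would follow."""
    s = upsample2x(score_n) + score_n1   # score_up(n) + score(n-1)
    s = upsample2x(s) + score_n2         # score_up(n-1) + score(n-2)
    return s                             # corresponds to score_up(n-2)

# toy single-class score maps: 2x2, 4x4, 8x8
out = skip_fuse(np.ones((2, 2)), np.ones((4, 4)), np.ones((8, 8)))
print(out.shape)  # (8, 8)
```

Fusing scores from progressively shallower pooling layers in this way is what recovers the spatial detail that the coarse top-layer prediction lacks.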
4. The lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation according to claim 3, characterized in that the detailed process of step S3 is as follows:
S31: fitting a minimum-area circumscribed rectangle to the FCN sow segmentation result, and extracting the corresponding rectangular region of the original image;
S32: converting the extracted rectangular region image into the gray space and the HSV space respectively; counting M sow images under different illuminations and in different pens to obtain the average gray value of the piglets as the gray threshold, so as to exclude the piglet regions; computing the hue threshold of the H component with the Otsu method; and extracting the sow pixels according to the following formula, where S_TH(i,j) denotes the binary map after segmentation, H(i,j) denotes the H component, G(i,j) denotes the grayscale image, th_h is the hue threshold and th_g is the gray threshold:
if the hue value of the H component is greater than the hue threshold and the gray value of the grayscale image is greater than the gray threshold, the pixel is labelled 1, and otherwise 0; the threshold segmentation result is thus obtained and then post-processed.
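A minimal sketch of the dual-channel thresholding of step S32, with Otsu's method implemented directly in NumPy. The helper names are hypothetical; in the patented method the gray threshold may instead come from the piglet gray statistics described above rather than from Otsu.

```python
import numpy as np

def otsu_threshold(channel):
    """Otsu's method on an 8-bit channel: return the gray level that
    maximizes the between-class variance."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def dual_channel_mask(gray, hue, th_g=None, th_h=None):
    """S_TH(i,j) = 1 iff H(i,j) > th_h and G(i,j) > th_g (claim 4)."""
    th_g = otsu_threshold(gray) if th_g is None else th_g
    th_h = otsu_threshold(hue) if th_h is None else th_h
    return ((hue > th_h) & (gray > th_g)).astype(np.uint8)

# toy bimodal channels: dark/low-hue background, bright/high-hue sow
gray = np.array([[50, 50, 200, 200]], dtype=np.uint8)
hue = np.array([[10, 10, 120, 120]], dtype=np.uint8)
print(dual_channel_mask(gray, hue).tolist())  # [[0, 0, 1, 1]]
```

Requiring both channels to exceed their thresholds suppresses pixels that are bright but have the wrong hue (or vice versa), which is the point of thresholding two channels instead of one.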
5. The lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation according to claim 4, characterized in that the fusion of step S4 merges the FCN segmentation result and the threshold segmentation result according to the following formula, where S_FCN(i,j) is the FCN segmentation result and S_TH(i,j) is the multi-channel threshold segmentation result; post-processing is performed after fusion to obtain the final segmentation result I(i,j);
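The fusion formula itself is not reproduced in this text. Given the stated complementarity of the two results, a pixel-wise union is one plausible reading, sketched here purely as an assumption:

```python
import numpy as np

def fuse(s_fcn, s_th):
    """Fuse the FCN mask and the threshold mask. The patent's exact
    formula is not reproduced in this text; a pixel-wise union (logical
    OR) is assumed here from the stated complementarity of the two
    results. Post-processing of I(i,j) would follow."""
    return (s_fcn.astype(bool) | s_th.astype(bool)).astype(np.uint8)

# each mask recovers a sow pixel the other misses
s_fcn = np.array([[1, 0], [0, 0]], dtype=np.uint8)
s_th = np.array([[0, 1], [0, 0]], dtype=np.uint8)
print(fuse(s_fcn, s_th).tolist())  # [[1, 1], [0, 0]]
```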
6. The lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation according to claim 3, characterized in that the deconvolution refers to up-sampling the output data, the up-sampling being realized by bilinear interpolation.
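Bilinear up-sampling by transposed convolution is commonly realized by initializing the kernel with the standard FCN bilinear filter. The construction below is that general recipe, not weights stated in the patent:

```python
import numpy as np

def bilinear_kernel(factor):
    """Weights of a transposed-convolution kernel that performs bilinear
    up-sampling by `factor` (the standard FCN initialization)."""
    size = 2 * factor - factor % 2
    center = (size - 1) / 2 if size % 2 == 1 else factor - 0.5
    og = np.arange(size)
    filt1d = 1 - np.abs(og - center) / factor   # triangle (hat) filter
    return np.outer(filt1d, filt1d)             # separable 2-D kernel

k = bilinear_kernel(2)  # 4x4 kernel for 2x up-sampling
print(k.shape)  # (4, 4)
```

The kernel weights sum to factor², so applying it with stride equal to `factor` preserves the overall intensity of the up-sampled score map.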
7. The lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation according to claim 3, characterized in that the prediction result is a two-dimensional map in which the value at each coordinate position represents the probability of each class.
8. The lactating sow image segmentation method fusing FCN and multi-channel threshold segmentation according to claim 2, characterized in that the training set refers to the data set used to train the segmentation model; the validation set refers to the data set used during training to optimize the network structure parameters and to select the optimal network model; and the test set refers to the data set used to test model performance and carry out performance evaluation.
CN201710772176.2A 2017-08-31 2017-08-31 Lactating sow image segmentation method fusing FCN and threshold segmentation Active CN107527351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710772176.2A CN107527351B (en) 2017-08-31 2017-08-31 Lactating sow image segmentation method fusing FCN and threshold segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710772176.2A CN107527351B (en) 2017-08-31 2017-08-31 Lactating sow image segmentation method fusing FCN and threshold segmentation

Publications (2)

Publication Number Publication Date
CN107527351A true CN107527351A (en) 2017-12-29
CN107527351B CN107527351B (en) 2020-12-29

Family

ID=60683020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710772176.2A Active CN107527351B (en) 2017-08-31 2017-08-31 Lactating sow image segmentation method fusing FCN and threshold segmentation

Country Status (1)

Country Link
CN (1) CN107527351B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920243A (en) * 2017-03-09 2017-07-04 桂林电子科技大学 The ceramic material part method for sequence image segmentation of improved full convolutional neural networks
CN107016415A (en) * 2017-04-12 2017-08-04 合肥工业大学 A kind of coloured image Color Semantic sorting technique based on full convolutional network
CN107025440A (en) * 2017-03-27 2017-08-08 北京航空航天大学 A kind of remote sensing images method for extracting roads based on new convolutional neural networks


Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198192A (en) * 2018-01-15 2018-06-22 任俊芬 A kind of quick human body segmentation's method of high-precision based on deep learning
CN108288388A (en) * 2018-01-30 2018-07-17 深圳源广安智能科技有限公司 A kind of intelligent traffic monitoring system
CN108388877A (en) * 2018-03-14 2018-08-10 广州影子控股股份有限公司 The recognition methods of one boar face
CN108830854A (en) * 2018-03-22 2018-11-16 广州多维魔镜高新科技有限公司 A kind of image partition method and storage medium
CN108830144B (en) * 2018-05-03 2022-02-22 华南农业大学 Lactating sow posture identification method based on improved Faster-R-CNN
CN108830144A (en) * 2018-05-03 2018-11-16 华南农业大学 A kind of milking sow gesture recognition method based on improvement Faster-R-CNN
CN108734211A (en) * 2018-05-17 2018-11-02 腾讯科技(深圳)有限公司 The method and apparatus of image procossing
CN108734211B (en) * 2018-05-17 2019-12-24 腾讯科技(深圳)有限公司 Image processing method and device
US11373305B2 (en) 2018-05-17 2022-06-28 Tencent Technology (Shenzhen) Company Limited Image processing method and device, computer apparatus, and storage medium
CN108921938A (en) * 2018-06-28 2018-11-30 西安交通大学 Hierarchical structure construction method in 3D scene based on maximal flows at lowest cost
CN108921938B (en) * 2018-06-28 2020-06-19 西安交通大学 Hierarchical structure construction method based on minimum cost and maximum flow in 3D scene
CN109145769A (en) * 2018-08-01 2019-01-04 辽宁工业大学 The target detection network design method of blending image segmentation feature
CN109492535A (en) * 2018-10-12 2019-03-19 华南农业大学 A kind of sow Breast feeding behaviour recognition methods of computer vision
CN109492535B (en) * 2018-10-12 2021-09-24 华南农业大学 Computer vision sow lactation behavior identification method
CN109754440A (en) * 2018-12-24 2019-05-14 西北工业大学 A kind of shadow region detection method based on full convolutional network and average drifting
CN109785337A (en) * 2018-12-25 2019-05-21 哈尔滨工程大学 Mammal counting method in a kind of column of Case-based Reasoning partitioning algorithm
CN109785337B (en) * 2018-12-25 2021-07-06 哈尔滨工程大学 In-column mammal counting method based on example segmentation algorithm
CN109815991A (en) * 2018-12-29 2019-05-28 北京城市网邻信息技术有限公司 Training method, device, electronic equipment and the storage medium of machine learning model
CN109815991B (en) * 2018-12-29 2021-02-19 北京城市网邻信息技术有限公司 Training method and device of machine learning model, electronic equipment and storage medium
CN109766856B (en) * 2019-01-16 2022-11-15 华南农业大学 Method for recognizing postures of lactating sows through double-current RGB-D Faster R-CNN
CN109766856A (en) * 2019-01-16 2019-05-17 华南农业大学 A kind of method of double fluid RGB-D Faster R-CNN identification milking sow posture
CN109886985B (en) * 2019-01-22 2021-02-12 浙江大学 Image accurate segmentation method fusing deep learning network and watershed algorithm
CN109886985A (en) * 2019-01-22 2019-06-14 浙江大学 Merge the image Accurate Segmentation method of deep learning network and watershed algorithm
CN109886271A (en) * 2019-01-22 2019-06-14 浙江大学 It merges deep learning network and improves the image Accurate Segmentation method of edge detection
CN110222664A (en) * 2019-06-13 2019-09-10 河南牧业经济学院 A kind of feeding monitoring system of intelligent pigsty based on the analysis of video activity
CN110222664B (en) * 2019-06-13 2021-07-02 河南牧业经济学院 Intelligent pig housing monitoring system based on video activity analysis
CN110544258A (en) * 2019-08-30 2019-12-06 北京海益同展信息科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN110544258B (en) * 2019-08-30 2021-05-25 北京海益同展信息科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN110992301A (en) * 2019-10-14 2020-04-10 数量级(上海)信息技术有限公司 Gas contour identification method
CN110889418A (en) * 2019-11-04 2020-03-17 数量级(上海)信息技术有限公司 Gas contour identification method
CN111161292A (en) * 2019-11-21 2020-05-15 合肥合工安驰智能科技有限公司 Ore size measurement method and application system
CN111161292B (en) * 2019-11-21 2023-09-05 合肥合工安驰智能科技有限公司 Ore scale measurement method and application system
CN111489387A (en) * 2020-04-09 2020-08-04 湖南盛鼎科技发展有限责任公司 Remote sensing image building area calculation method
CN111489387B (en) * 2020-04-09 2023-10-20 湖南盛鼎科技发展有限责任公司 Remote sensing image building area calculation method
CN111709333B (en) * 2020-06-04 2022-05-20 南京农业大学 Tracing early warning system based on abnormal excrement of cage-raised chickens and health monitoring method
CN111915636A (en) * 2020-07-03 2020-11-10 闽江学院 Method and device for positioning and dividing waste target
CN111915636B (en) * 2020-07-03 2023-10-24 闽江学院 Method and device for positioning and dividing waste targets
CN112085696A (en) * 2020-07-24 2020-12-15 中国科学院深圳先进技术研究院 Training method and segmentation method of medical image segmentation network model and related equipment
CN112085696B (en) * 2020-07-24 2024-02-23 中国科学院深圳先进技术研究院 Training method and segmentation method for medical image segmentation network model and related equipment
CN111932447A (en) * 2020-08-04 2020-11-13 中国建设银行股份有限公司 Picture processing method, device, equipment and storage medium
CN111932447B (en) * 2020-08-04 2024-03-22 中国建设银行股份有限公司 Picture processing method, device, equipment and storage medium
CN112508910A (en) * 2020-12-02 2021-03-16 创新奇智(深圳)技术有限公司 Defect extraction method and device for multi-classification defect detection
CN113516084A (en) * 2021-07-20 2021-10-19 海南长光卫星信息技术有限公司 High-resolution remote sensing image semi-supervised classification method, device, equipment and medium
CN113947617A (en) * 2021-10-19 2022-01-18 华南农业大学 Suckling piglet multi-target tracking method based on long and short memory
CN113947617B (en) * 2021-10-19 2024-04-16 华南农业大学 Multi-target tracking method for suckling piglets based on long and short memories
CN117436452A (en) * 2023-12-15 2024-01-23 西南石油大学 Financial entity identification method integrating context awareness and multi-level features
CN117436452B (en) * 2023-12-15 2024-02-23 西南石油大学 Financial entity identification method integrating context awareness and multi-level features

Also Published As

Publication number Publication date
CN107527351B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN107527351A (en) A kind of fusion FCN and Threshold segmentation milking sow image partition method
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
US10984532B2 (en) Joint deep learning for land cover and land use classification
EP3614308B1 (en) Joint deep learning for land cover and land use classification
CN106778902B (en) Dairy cow individual identification method based on deep convolutional neural network
Wu et al. Detection and counting of banana bunches by integrating deep learning and classic image-processing algorithms
Mohamed et al. Msr-yolo: Method to enhance fish detection and tracking in fish farms
CN106778687B (en) Fixation point detection method based on local evaluation and global optimization
CN107977671A (en) A kind of tongue picture sorting technique based on multitask convolutional neural networks
CN105740758A (en) Internet video face recognition method based on deep learning
CN108416266A (en) A kind of video behavior method for quickly identifying extracting moving target using light stream
CN106951870A (en) The notable event intelligent detecting prewarning method of monitor video that active vision notes
CN112069985B (en) High-resolution field image rice spike detection and counting method based on deep learning
Pinto et al. Crop disease classification using texture analysis
Gan et al. Fast and accurate detection of lactating sow nursing behavior with CNN-based optical flow and features
CN116721414A (en) Medical image cell segmentation and tracking method
Sharma Rise of computer vision and internet of things
Buayai et al. Supporting table grape berry thinning with deep neural network and augmented reality technologies
CN112446417B (en) Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation
Ambashtha et al. Leaf disease detection in crops based on single-hidden layer feed-forward neural network and hierarchal temporary memory
Huang et al. A survey of deep learning-based object detection methods in crop counting
Balakrishna et al. Tomato Leaf Disease Detection Using Deep Learning: A CNN Approach
CN113449712B (en) Goat face identification method based on improved Alexnet network
Baniya et al. Current state, data requirements and generative ai solution for learning-based computer vision in horticulture
Yang The use of video to detect and measure pollen on bees entering a hive

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant