CN108090426A - Machine-vision-based method for identifying individual group-housed pigs - Google Patents

Machine-vision-based method for identifying individual group-housed pigs

- Publication number: CN108090426A
- Application number: CN201711284288.XA
- Authority: CN (China)
- Prior art keywords: pig, feature
- Prior art date: 2017-12-07
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

- G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V20/40—Scene-specific elements in video content; G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
- G—PHYSICS; G06—COMPUTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/24—Classification techniques; G06F18/2411—Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
- G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/20—Image preprocessing; G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V20/40—Scene-specific elements in video content; G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The present invention provides a method for identifying individual group-housed pigs based on machine vision. First, top-view video of the group-housed pigs is acquired; next, individual pigs are extracted from single video frames; then, multi-scale, multi-orientation features of each pig are extracted in the frequency domain and fine detail features are extracted in the spatial domain; the extracted features are reduced in dimensionality and concatenated into a feature vector; finally, a classifier is built, trained, and used for identification. Compared with manual observation and ear tags, identifying individual pigs with machine vision and pattern recognition is contactless: no electronic ear tag needs to be attached invasively, stress responses in the pigs are less likely, and problems such as lost ear tags are avoided. Labor costs are reduced, and the method provides a technical basis for subsequent automatic analysis of livestock and poultry behavior.
Description
Technical field
The present invention relates to machine vision and pattern recognition, and in particular to a machine-vision-based method for identifying individual group-housed pigs in top-view surveillance video.
Background

Identifying individual animals is an important step in the automatic analysis of livestock and poultry behavior. In captured pig images, patterns can be observed on the body surface, for example on the back and flanks. These patterns arise from differences in muscle shape, from the different growth directions of hair on the skin surface, and from spots formed by variations in skin color. Under the combined influence of genetic and environmental factors, the patterns take different geometric forms at different body locations and depend on breed, rearing process, and growth and development. They can be described by texture features and provide a basis for distinguishing different pigs. Texture is a visual feature that is independent of color and brightness variation and describes the spatial distribution of pixel neighborhoods in an image. The present invention acquires top-view video of group-housed pigs, extracts texture features of the pig body surface, and applies machine vision and pattern recognition to identify individual group-housed pigs. Compared with manual observation and ear tags, identification by machine vision and pattern recognition is contactless: no electronic ear tag needs to be attached invasively, stress responses are less likely, and problems such as lost ear tags are avoided. Labor costs are reduced, and the method provides a technical basis for subsequent automatic analysis of livestock and poultry behavior.
Content of the invention

The purpose of the present invention is to realize identification of individual pig identity from acquired top-view video of group-housed pigs by means of machine vision and pattern recognition.
The technical solution adopted by the present invention is a machine-vision-based method for identifying individual group-housed pigs: it avoids attaching an electronic ear tag to the pig invasively, acquires the features of each pig solely from video surveillance images, and is therefore contactless. It comprises the following steps:
(1) acquire top-view video of group-housed pigs;
(2) extract individual pigs from single video frames;
(3) in the frequency domain, extract multi-scale, multi-orientation features of each pig;
(4) in the spatial domain, extract fine detail features of each pig;
(5) reduce the dimensionality of the extracted features and concatenate them into a feature vector;
(6) build a classifier, train it, and perform identification.
Step (1) in detail: an image acquisition system is installed in the experimental pig house to monitor the group-housed pigs by video. The camera is mounted directly above the pen and acquires top-view video, yielding RGB color video of the group-housed pigs together with the background.
Step (2) in detail: the acquired video of the group-housed pigs is split into frames, and target extraction is performed on each frame to obtain images of single pigs. The specific method is as follows:
1. select frames that meet the experimental conditions, in which all pig targets can be completely separated without adhesion;
2. obtain the foreground image of each single pig with a top-view multi-target extraction method for group-housed pigs based on adaptive block-wise multi-threshold segmentation;
3. to obtain pig images suitable for feature extraction, use the result of top-view pig detection as a mask and apply it to the original frame;
4. centered on the pig centroid, crop a square sub-image large enough to contain the whole pig and normalize all sub-images to the same size.
Step (3) in detail: the pig image is transformed to the frequency domain, and the body-surface pattern is described by Gabor magnitude responses. The steps are as follows: Gabor transforms at different scales and orientations are applied to the pig image; the pig image I(·) is convolved with the Gabor kernels Ψ_{u,v}(·), as in the following formula:

$$G_{u,v}(z) = I(z) * \Psi_{u,v}(z) = A_{u,v}(z)\, e^{i\theta_{u,v}(z)}$$

where u denotes orientation, v denotes scale, and * denotes convolution. Convolving the input image with two-dimensional Gabor wavelets at u × v different orientations and scales yields u × v Gabor magnitude response images.

The Gabor feature vector is then extracted from the pig image as follows:
1. the gray values of each Gabor magnitude response image are concatenated row by row into a single row vector;
2. the row vectors formed from the u × v Gabor magnitude response images are then concatenated in turn, fusing different spatial frequencies, spatial positions, and orientation selectivities;
the resulting row vector is the Gabor feature extracted from the pig image.
Step (4) in detail: the micro-structural details of the pig image are computed to record the local variation of pixel values. The steps are as follows:
1. divide the original image into sub-blocks of equal size;
2. within each sub-block, take the gray value of the center pixel I_c as a threshold and compare it with the gray values of its neighborhood pixels I_k; the comparison binarizes the neighborhood, setting points greater than or equal to the threshold to 1 and points below the threshold to 0, according to the following formula:

$$Z(I_k, I_c) = \begin{cases} 1, & \text{if } I_k - I_c \ge 0 \\ 0, & \text{if } I_k - I_c < 0 \end{cases}, \quad k = 0, 1, \ldots, 7$$

3. the comparison results are weighted by pixel position and summed into a binary sequence, which is the local binary pattern code f_l(x_c, y_c) of the sub-block, computed as:

$$f_l(x_c, y_c) = \sum_{k=0}^{7} Z(I_k, I_c)\, 2^k$$

4. codes whose bit sequence contains at most two 0-to-1 or 1-to-0 transitions are each treated as a separate pattern, and all remaining codes are merged into a single pattern.

The local binary pattern feature is then extracted from the pig image as follows:
1. divide the local binary pattern code map f_l(x_c, y_c) into m sub-blocks {R_1, R_2, ..., R_m};
2. compute the histogram H_{i,j} of each sub-block:

$$H_{i,j} = \sum_{x,y} Q\{f_l(x, y) = i\}\, Q\{(x, y) \in R_j\}, \qquad Q\{M\} = \begin{cases} 1, & M = \text{true} \\ 0, & M = \text{false} \end{cases}$$

3. concatenate the histograms of all sub-blocks to form the local binary pattern feature H of the pig image:

H = [H_{i,j}], i = 0, 1, ..., n-1, j = 0, 1, ..., m-1

where n is the number of distinct patterns produced by the local binary pattern descriptor.
Step (5) in detail:
1. for the Gabor-filtered images, perform spatial down-sampling by keeping one sample every k rows and every k columns;
2. apply principal component analysis to reduce the dimensionality of the down-sampled Gabor feature vector;
3. apply principal component analysis to reduce the dimensionality of the extracted local binary pattern feature;
4. concatenate the Gabor feature vector and the local binary pattern feature vector to obtain the feature of the current pig image extracted from the frequency and spatial domains.
Step (6) in detail: each pig image is associated with a manually assigned label. Multi-class classification is performed with support vector machines using the one-versus-one strategy: a support vector machine classifier is built between the pig features of every pair of distinct labels, giving k(k-1)/2 classifiers in total, which together form the model obtained from the training set.

The identification procedure is as follows: the feature vector extracted from a pig image is judged by every classifier built during training; each decision casts one vote for the pig class it selects, and the input image is finally assigned to the class with the most votes.
The beneficial effects of the invention are as follows: by analyzing and processing video of group-housed pigs, the invention realizes identification of individual pigs under actual rearing conditions. In a real pig farm, illumination is uneven and contrast varies from place to place; as the pigs move to different locations in the pen, differences in illumination and noise in the acquired images are unavoidable. The invention can overcome the influence of uneven illumination and noise to a certain extent and achieve an accurate recognition rate. As a machine-vision method it overcomes the limitations of traditional manual observation and ear tags: it removes the labor cost and the disturbance to the pigs caused by manual observation, avoids the invasiveness of ear tags, and reduces cost.
Description of the drawings

Fig. 1 is a flow chart of the machine-vision-based method for identifying individual group-housed pigs according to the present invention;
Fig. 2 shows the video acquisition platform of the present invention and an acquired video frame: (a) the video acquisition platform, (b) an acquired frame;
Fig. 3 shows the extraction of a single pig from a frame of group-housed-pig video: (a) a frame after splitting the video, (b) the foreground detection result, (c) the single-pig target extraction result, (d) the normalized pig image, (e) the seven pig color images extracted from the current frame;
Fig. 4 shows the extraction of multi-scale, multi-orientation features from a pig image: (a) the pig image, (b) the Gabor-filtered magnitude responses;
Fig. 5 shows the extraction of fine detail features from a pig image: (a) the local binary pattern code computed from the pig image, (b) the histogram of a single local binary pattern block, (c) the concatenated histogram when the local binary pattern code map is divided into 9 × 9 blocks.
Specific embodiment

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments, but the scope of protection of the present invention is not limited thereto.

Fig. 1 is the flow chart of the machine-vision-based method for identifying individual group-housed pigs; with reference to this figure, the specific embodiment of each part is further explained.
(1) Acquire top-view video of group-housed pigs.

The specific method is as follows: as shown in Fig. 2, the experimental pig house is refitted and an image acquisition system that records top-view video is installed to monitor the group-housed pigs. The camera is mounted 3 meters above the floor, directly above the pen, and acquires top-view video, yielding RGB color video of the group-housed pigs together with the background. A minimal frame-splitting sketch follows.
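The patent does not prescribe an implementation of the acquisition and framing step; the sketch below, using OpenCV, only illustrates splitting an already recorded top-view video into frames. The file paths and the sampling interval are assumptions for illustration.

```python
# Minimal sketch: split the acquired top-view surveillance video into frames.
# Paths and the sampling interval are placeholders, not values from the patent.
import os
import cv2

def split_video_to_frames(video_path: str, out_dir: str, every_n: int = 25) -> int:
    """Save every n-th frame of the RGB video and return the number saved."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```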
(2) Extract individual pigs from single video frames.

The specific method is as follows (a sketch follows the list): as shown in Fig. 3, the acquired video of the group-housed pigs is split into frames and target extraction is performed on each frame to obtain images of single pigs.

1. select frames that meet the experimental conditions, in which all pig targets can be completely separated without adhesion;
2. obtain the foreground image of each single pig with a top-view multi-target extraction method for group-housed pigs based on adaptive block-wise multi-threshold segmentation;
3. to obtain pig images suitable for feature extraction, use the result of top-view pig detection as a mask and apply it to the original frame;
4. centered on the pig centroid, crop a square sub-image large enough to contain the whole pig and normalize all sub-images to the same size.
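A simplified sketch of this step is given below. Global Otsu thresholding and connected-component analysis stand in for the adaptive block-wise multi-threshold multi-target extraction named above, which the patent does not detail; the masking, centroid-centered square crop, and size normalization follow the list. The patch size, crop margin, and minimum blob area are assumptions.

```python
# Simplified single-pig extraction sketch: Otsu thresholding + connected components
# stand in for the adaptive block-wise multi-threshold method of the patent.
import cv2
import numpy as np

def extract_pig_patches(frame_bgr: np.ndarray, out_size: int = 128) -> list:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    patches = []
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 500:    # discard small blobs (assumed threshold)
            continue
        # use the detected pig region as a mask on the original frame
        blob = (labels == i).astype(np.uint8) * 255
        masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=blob)
        # square crop centered on the centroid, then normalize to a fixed size
        cx, cy = centroids[i]
        half = max(stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]) // 2 + 10
        y0, y1 = max(int(cy - half), 0), int(cy + half)
        x0, x1 = max(int(cx - half), 0), int(cx + half)
        crop = masked[y0:y1, x0:x1]
        if crop.size:
            patches.append(cv2.resize(crop, (out_size, out_size)))
    return patches
```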
(3) In the frequency domain, extract multi-scale, multi-orientation features of each pig.

The specific method is as follows: the pig image is transformed to the frequency domain, and the body-surface pattern is described by Gabor magnitude responses. Gabor transforms at different scales and orientations are applied to the pig image; the pig image I(·) is convolved with the Gabor kernels Ψ_{u,v}(·):

$$G_{u,v}(z) = I(z) * \Psi_{u,v}(z) = A_{u,v}(z)\, e^{i\theta_{u,v}(z)}$$

where u denotes orientation, v denotes scale, and * denotes convolution. The body-surface pattern of the pig is formed jointly by differences in muscle shape, by the different growth directions of hair on the skin surface, and by spots formed by variations in skin color, and therefore has components at different scales and orientations. As shown in Fig. 4, convolving the input image with two-dimensional Gabor wavelets at 5 × 8 different orientations and scales yields 5 × 8 = 40 Gabor feature images.

The Gabor feature vector is then extracted from the pig image as follows (a sketch follows the list):
1. the gray values of each Gabor magnitude response image are concatenated row by row into a single row vector;
2. the row vectors formed from the u × v Gabor magnitude response images are then concatenated in turn, fusing different spatial frequencies, spatial positions, and orientation selectivities;
the resulting row vector is the Gabor feature extracted from the pig image.
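A minimal sketch of the Gabor magnitude-response feature follows, assuming the 5 scales and 8 orientations of the embodiment (5 × 8 = 40 responses). The kernel size, wavelength, and bandwidth parameters are illustrative assumptions; each orientation/scale uses a quadrature pair of kernels so that the magnitude A_{u,v}(z) can be formed.

```python
# Sketch of the multi-scale, multi-orientation Gabor magnitude feature.
# Kernel size and the scale/wavelength progression are assumptions.
import cv2
import numpy as np

def gabor_feature(gray: np.ndarray, n_scales: int = 5, n_orient: int = 8) -> np.ndarray:
    """Concatenate the 40 Gabor magnitude responses into one row vector."""
    img = gray.astype(np.float32)
    responses = []
    for v in range(n_scales):
        sigma = 2.0 * (v + 1)          # assumed scale progression
        lambd = 4.0 * (v + 1)          # assumed wavelength per scale
        for u in range(n_orient):
            theta = np.pi * u / n_orient
            k_re = cv2.getGaborKernel((31, 31), sigma, theta, lambd, 0.5, psi=0)
            k_im = cv2.getGaborKernel((31, 31), sigma, theta, lambd, 0.5, psi=np.pi / 2)
            re = cv2.filter2D(img, cv2.CV_32F, k_re)
            im = cv2.filter2D(img, cv2.CV_32F, k_im)
            responses.append(np.hypot(re, im).ravel())  # A_{u,v}(z), flattened row by row
    return np.concatenate(responses)
```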
(4) In the spatial domain, extract fine detail features of each pig.

The specific method is as follows: as shown in Fig. 5(a), the micro-structural details of the pig image are first computed to record the local variation of pixel values:
1. divide the original image into sub-blocks of equal size;
2. within each sub-block, take the gray value of the center pixel I_c as a threshold and compare it with the gray values of its neighborhood pixels I_k; the comparison binarizes the neighborhood, setting points greater than or equal to the threshold to 1 and points below the threshold to 0:

$$Z(I_k, I_c) = \begin{cases} 1, & \text{if } I_k - I_c \ge 0 \\ 0, & \text{if } I_k - I_c < 0 \end{cases}, \quad k = 0, 1, \ldots, 7$$

3. the comparison results are weighted by pixel position and summed into a binary sequence, which is the local binary pattern code f_l(x_c, y_c) of the sub-block:

$$f_l(x_c, y_c) = \sum_{k=0}^{7} Z(I_k, I_c)\, 2^k$$

where I_c is the center pixel of the sub-block, located at (x_c, y_c), and I_k are the pixels around I_c;
4. codes whose bit sequence contains at most two 0-to-1 or 1-to-0 transitions are each treated as a separate pattern, and all remaining codes are merged into a single pattern.

Then, as shown in Fig. 5(b) and (c), the local binary pattern feature is extracted from the pig image as follows (a sketch follows the list):
1. divide the local binary pattern code map f_l(x_c, y_c) into m sub-blocks {R_1, R_2, ..., R_m};
2. compute the histogram H_{i,j} of each sub-block:

$$H_{i,j} = \sum_{x,y} Q\{f_l(x, y) = i\}\, Q\{(x, y) \in R_j\}, \qquad Q\{M\} = \begin{cases} 1, & M = \text{true} \\ 0, & M = \text{false} \end{cases}$$

where f_l is the local binary pattern code image, (x, y) are pixel coordinates, Q indicates whether the pixel at (x, y) falls in sub-block R_j and whether its code equals i, and H_{i,j} is the local binary pattern histogram of a single sub-block;
3. concatenate the histograms of all sub-blocks to form the local binary pattern feature H of the pig image:

H = [H_{i,j}], i = 0, 1, ..., n-1, j = 0, 1, ..., m-1

where n is the number of distinct patterns produced by the local binary pattern descriptor.
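A sketch of the block-wise uniform local binary pattern histogram follows, using scikit-image's local_binary_pattern as a stand-in for the 8-neighbour code defined above; the "nri_uniform" mapping keeps each code with at most two 0/1 transitions as its own bin and merges the rest, giving n = 59 bins for 8 neighbours. The 9 × 9 block layout matches Fig. 5(c); the input image size is an assumption.

```python
# Sketch of the block-wise LBP histogram feature H = [H_{i,j}].
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_block_histogram(gray: np.ndarray, blocks_per_side: int = 9) -> np.ndarray:
    """gray: 2-D grayscale pig image; returns the concatenated block histograms."""
    codes = local_binary_pattern(gray, P=8, R=1, method="nri_uniform")
    n_bins = 59                      # uniform patterns kept separately, the rest merged
    h, w = codes.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side
    hists = []
    for j in range(blocks_per_side):
        for i in range(blocks_per_side):
            block = codes[j * bh:(j + 1) * bh, i * bw:(i + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(hist)
    return np.concatenate(hists).astype(np.float32)
```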
(5) Reduce the dimensionality of the extracted features and concatenate them into a feature vector.

The specific method is as follows (a sketch follows the list):
1. for the Gabor-filtered images, perform spatial down-sampling by keeping one sample every k rows and every k columns;
2. apply principal component analysis to reduce the dimensionality of the down-sampled Gabor feature vector;
3. apply principal component analysis to reduce the dimensionality of the extracted local binary pattern feature;
4. concatenate the Gabor feature vector and the local binary pattern feature vector to obtain the feature of the current pig image extracted from the frequency and spatial domains.
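The down-sampling, PCA reduction, and cascading might look like the sketch below, which assumes that the per-image Gabor magnitude responses are still available as 2-D arrays before down-sampling. The sampling step k and the retained numbers of components are assumptions; in practice the two PCA models would be fitted on the training set only and reused to transform the images to be identified.

```python
# Sketch of feature dimensionality reduction and concatenation.
import numpy as np
from sklearn.decomposition import PCA

def build_feature_matrix(gabor_maps: list, lbp_feats: np.ndarray,
                         k: int = 4, n_gabor: int = 100, n_lbp: int = 50) -> np.ndarray:
    """gabor_maps: per image, a list of 2-D Gabor magnitude responses;
    lbp_feats: (n_images, lbp_dim) array of block LBP histograms."""
    # keep one sample every k rows and every k columns, then flatten and concatenate
    gabor_vecs = np.stack([
        np.concatenate([m[::k, ::k].ravel() for m in maps]) for maps in gabor_maps
    ])
    gabor_red = PCA(n_components=n_gabor).fit_transform(gabor_vecs)
    lbp_red = PCA(n_components=n_lbp).fit_transform(lbp_feats)
    return np.hstack([gabor_red, lbp_red])   # cascaded frequency-domain + spatial-domain feature
```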
(6) Build a classifier, train it, and perform identification.

The specific method is as follows (a sketch follows). Training: each pig image is associated with a manually assigned label, and multi-class classification is performed with support vector machines using the one-versus-one strategy. A support vector machine classifier is built between the pig features of every pair of distinct labels. In this experiment there are 7 classes of pig images, so 7 × (7 - 1)/2 = 21 classifiers are built in total; together they form the model obtained from the training set.

Identification: the feature vector extracted from a pig image is judged by each of the 21 classifiers built during training; each decision casts one vote for the pig class it selects, and the input image is finally assigned to the class with the most votes.
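A minimal training and identification sketch using scikit-learn's SVC is shown below; SVC implements the same one-versus-one scheme described above, building k(k-1)/2 pairwise classifiers internally (21 for the 7 pigs of this experiment) and assigning the class with the most votes. The kernel and penalty parameter are illustrative assumptions.

```python
# Sketch of one-versus-one SVM training and voting-based identification.
import numpy as np
from sklearn.svm import SVC

def train_and_identify(train_X: np.ndarray, train_y: np.ndarray,
                       query_X: np.ndarray) -> np.ndarray:
    clf = SVC(kernel="rbf", C=1.0, decision_function_shape="ovo")
    clf.fit(train_X, train_y)      # builds the 21 pairwise classifiers for 7 pigs
    return clf.predict(query_X)    # majority vote over the pairwise decisions
```

For example, `pred = train_and_identify(train_features, pig_labels, query_features)` would return the predicted pig identity for each query image.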
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative example", "example", "specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the invention is defined by the claims and their equivalents.
Claims (8)
1. A machine-vision-based method for identifying individual group-housed pigs, characterized by comprising the following steps:
1) acquire top-view video of group-housed pigs;
2) extract individual pigs from single video frames;
3) in the frequency domain, extract multi-scale, multi-orientation features of each pig;
4) in the spatial domain, extract fine detail features of each pig;
5) reduce the dimensionality of the extracted features and concatenate them into a feature vector;
6) build a classifier, train it, and perform identification.
2. The machine-vision-based method for identifying individual group-housed pigs according to claim 1, characterized in that step 3) comprises transforming the pig image to the frequency domain and describing the body-surface pattern by Gabor magnitude responses, as follows: Gabor transforms at different scales and orientations are applied to the pig image, and the pig image I(·) is convolved with the Gabor kernels Ψ_{u,v}(·), as in the following formula:
$$G_{u,v}(z) = I(z) * \Psi_{u,v}(z) = A_{u,v}(z)\, e^{i\theta_{u,v}(z)}$$
where u denotes orientation, v denotes scale, and * denotes convolution; convolving the input image with two-dimensional Gabor wavelets at u × v different orientations and scales yields u × v Gabor magnitude response images.
3. The machine-vision-based method for identifying individual group-housed pigs according to claim 1, characterized in that step 3) further comprises extracting a Gabor feature vector from the pig image, as follows:
the gray values of each Gabor magnitude response image are concatenated row by row into a single row vector;
the row vectors formed from the u × v Gabor magnitude response images are then concatenated in turn, fusing different spatial frequencies, spatial positions, and orientation selectivities;
the resulting row vector is the Gabor feature extracted from the pig image.
4. The machine-vision-based method for identifying individual group-housed pigs according to claim 1, characterized in that step 4) comprises computing the micro-structural details of the pig image to record the local variation of pixel values, as follows:
divide the original image into sub-blocks of equal size;
within each sub-block, take the gray value of the center pixel I_c as a threshold and compare it with the gray values of its neighborhood pixels I_k; the comparison binarizes the neighborhood, setting points greater than or equal to the threshold to 1 and points below the threshold to 0, according to the following formula:
$$Z(I_k, I_c) = \begin{cases} 1, & \text{if } I_k - I_c \ge 0 \\ 0, & \text{if } I_k - I_c < 0 \end{cases}, \quad k = 0, 1, \ldots, 7$$
the comparison results are weighted by pixel position and summed into a binary sequence, which is the local binary pattern code f_l(x_c, y_c) of the sub-block, computed as shown in the following formula:
$$f_l(x_c, y_c) = \sum_{k=0}^{7} Z(I_k, I_c)\, 2^k$$
where I_c is the center pixel of the sub-block, located at (x_c, y_c), and I_k are the pixels around I_c; codes whose bit sequence contains at most two 0-to-1 or 1-to-0 transitions are each treated as a separate pattern, and all remaining codes are merged into a single pattern.
5. The machine-vision-based method for identifying individual group-housed pigs according to claim 1, characterized in that step 4) further comprises extracting the local binary pattern feature from the pig image, as follows:
divide the local binary pattern code map f_l(x_c, y_c) into m sub-blocks {R_1, R_2, ..., R_m};
then compute the histogram H_{i,j} of each sub-block, as shown in the following formula:
$$H_{i,j} = \sum_{x,y} Q\{f_l(x, y) = i\}\, Q\{(x, y) \in R_j\}, \qquad Q\{M\} = \begin{cases} 1, & M = \text{true} \\ 0, & M = \text{false} \end{cases}, \quad i = 0, 1, \ldots, n-1, \; j = 0, 1, \ldots, m-1$$
where f_l is the local binary pattern code image, (x, y) are pixel coordinates, Q indicates whether the pixel at (x, y) falls in sub-block R_j and whether its code equals i, and H_{i,j} is the local binary pattern histogram of a single sub-block; the histograms of all sub-blocks are then concatenated to form the local binary pattern feature H of the pig image, according to the following formula:

H = [H_{i,j}], i = 0, 1, ..., n-1, j = 0, 1, ..., m-1

where n is the number of distinct patterns produced by the local binary pattern descriptor.
6. The machine-vision-based method for identifying individual group-housed pigs according to claim 1, characterized in that step 5) proceeds as follows:
5.1) for the Gabor-filtered images, perform spatial down-sampling by keeping one sample every k rows and every k columns;
5.2) apply principal component analysis to reduce the dimensionality of the down-sampled Gabor feature vector;
5.3) apply principal component analysis to reduce the dimensionality of the extracted local binary pattern feature;
5.4) concatenate the Gabor feature vector and the local binary pattern feature vector to obtain the feature of the current pig image extracted from the frequency and spatial domains.
7. The machine-vision-based method for identifying individual group-housed pigs according to claim 1, characterized in that the classifier in step 6) is built and trained as follows: each pig image is associated with a manually assigned label; multi-class classification is performed with support vector machines using the one-versus-one strategy; a classifier is built between the pig features of every pair of distinct labels, giving k(k-1)/2 classifiers in total, which together form the model obtained from the training set.
8. The machine-vision-based method for identifying individual group-housed pigs according to claim 1, characterized in that the identification procedure in step 6) is as follows: the feature vector extracted from a pig image is judged by every classifier built during training; each decision casts one vote for the pig class it selects, and the input image is finally assigned to the class with the most votes.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711284288.XA | 2017-12-07 | 2017-12-07 | Machine-vision-based method for identifying individual group-housed pigs |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN108090426A | 2018-05-29 |

Family ID: 62174024

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711284288.XA | Machine-vision-based method for identifying individual group-housed pigs | 2017-12-07 | 2017-12-07 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN108090426A (en) |
Patent Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101154265A | 2006-09-29 | 2008-04-02 | 中国科学院自动化研究所 | Method for recognizing iris with matched characteristic and graph based on partial bianry mode |
| CN105654141A | 2016-01-06 | 2016-06-08 | 江苏大学 | Isomap and SVM algorithm-based overlooked herded pig individual recognition method |
| CN106778784A | 2016-12-20 | 2017-05-31 | 江苏大学 | Pig individual identification and drinking behavior analysis method based on machine vision |
| CN107437068A | 2017-07-13 | 2017-12-05 | 江苏大学 | Pig individual discrimination method based on Gabor direction histograms and pig chaeta hair pattern |
Cited By (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110826371A | 2018-08-10 | 2020-02-21 | 京东数字科技控股有限公司 | Animal identification method, device, medium and electronic equipment |
| CN110222725A | 2019-05-15 | 2019-09-10 | 广州影子科技有限公司 | Pig checking method, pig veritify device and pig verifying system |
| CN110402840A | 2019-07-25 | 2019-11-05 | 深圳市阿龙电子有限公司 | Live pig monitoring terminal and live pig monitoring system based on image recognition |
| CN110402840B | 2019-07-25 | 2021-12-17 | 深圳市阿龙电子有限公司 | Live pig monitoring terminal and live pig monitoring system based on image recognition |
| CN111242035A | 2020-01-14 | 2020-06-05 | 江苏大学 | Local feature-based group-breeding pig identity identification method |
| CN111242035B | 2020-01-14 | 2023-08-22 | 江苏大学 | Group pig raising identity recognition method based on local characteristics |
| CN118365847A | 2023-09-13 | 2024-07-19 | 张宇琦 | Training method for virtual electronic ear tag addition model, ear tag addition method and device |
| CN118333080A | 2024-06-13 | 2024-07-12 | 双胞胎(集团)股份有限公司 | Digital cultivation management method and digital cultivation management system |
| CN118333080B | 2024-06-13 | 2024-09-13 | 双胞胎(集团)股份有限公司 | Digital cultivation management method and digital cultivation management system |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180529 |