CN110176020A - Bird's nest impurity sorting method fusing 2D and 3D images - Google Patents

Bird's nest impurity sorting method fusing 2D and 3D images

Info

Publication number
CN110176020A
CN110176020A CN201910282067.1A
Authority
CN
China
Prior art keywords
image
bird
nest
feather
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910282067.1A
Other languages
Chinese (zh)
Inventor
黄琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910282067.1A priority Critical patent/CN110176020A/en
Publication of CN110176020A publication Critical patent/CN110176020A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A23 - FOODS OR FOODSTUFFS; TREATMENT THEREOF, NOT COVERED BY OTHER CLASSES
    • A23L - FOODS, FOODSTUFFS, OR NON-ALCOHOLIC BEVERAGES, NOT COVERED BY SUBCLASSES A21D OR A23B-A23J; THEIR PREPARATION OR TREATMENT, e.g. COOKING, MODIFICATION OF NUTRITIVE QUALITIES, PHYSICAL TREATMENT; PRESERVATION OF FOODS OR FOODSTUFFS, IN GENERAL
    • A23L33/00 - Modifying nutritive qualities of foods; Dietetic products; Preparation or treatment thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images

Abstract

The invention discloses a bird's nest impurity sorting method that fuses 2D and 3D images, comprising the following steps: S1, reconstruction and identification of bird's nest impurities; S1.1, image preprocessing. Image preprocessing is a very important link in the bird's nest identification process: image acquisition is subject to interference from various noises and the surrounding environment, which not only degrades the image but often submerges the required relevant information, complicating subsequent feature extraction. The present invention improves the working efficiency of bird's nest feather impurity sorting and effectively reduces the production cost of bird's nest; the resulting product precision is higher than that of manual picking, and the feather impurity picking work can be carried out stably over long periods. Introducing the method of the present invention improves the quality of bird's nest products, reduces their missed-detection and false-detection rates, and yields reliable, stable and accurate detection of bird's nest products.

Description

Bird's nest impurity sorting method fusing 2D and 3D images
Technical field
The present invention relates to the technical field of bird's nest impurity identification, and in particular to a bird's nest impurity sorting method that fuses 2D and 3D images.
Background technique
The removal of feather impurities is an important procedure in the bird's nest processing industry. At present, bird's nest feather impurities are picked manually, which has the following drawbacks. On the one hand, manual picking depends on the subjective judgment and experience of the operator, so it is difficult to obtain reliable, stable and accurate detection results; manual work lacks a unified standard, and the false-detection and missed-detection rates are high, so the sorted products are uneven in quality, which damages the company's interests. On the other hand, the working environment of feather picking and long-term manual labor also cause considerable injury to the workers' eyes, cervical vertebrae, and physical and mental health.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings and deficiencies of the existing technology by providing a bird's nest impurity sorting method that fuses 2D and 3D images. The method identifies and locates bird's nest impurities based on machine vision technology, which improves working efficiency, reduces labor cost, and greatly enhances the competitiveness of the enterprise in the market.
The purpose of the invention is achieved by the following technical solution:
A bird's nest impurity sorting method fusing 2D and 3D images comprises the following steps:
S1, reconstruction and identification of bird's nest impurities;
S1.1, image preprocessing;
Image preprocessing is a very important link in the bird's nest identification process. Image acquisition is subject to interference from various noises and the surrounding environment, which not only degrades the image but often submerges the required relevant information, complicating subsequent feature extraction. To filter out noise interference, improve image quality and highlight the required relevant information, relevant image preprocessing must be performed before bird's nest impurity identification and detection;
S1.1.1, image filtering;
During bird's nest image transmission and processing, the image is often polluted by various noises, producing dark-spot or bright-spot interference that reduces image quality and also affects the accuracy of feature extraction in image processing. Therefore, an effective image filtering algorithm must be chosen to remove the influence of noise. Common image filtering algorithms include frequency-domain filtering, spatial-domain mean filtering and median filtering;
The median filtering algorithm is a neighborhood operation: when applying median filtering to an image, the values covered by the template are sorted in ascending order, and the median of this sorted sequence is assigned to the pixel at the template center. If the template covers an odd number of points, the gray value of the middle pixel after sorting is taken as the median; if the template covers an even number of points, the average of the two middle values after sorting is taken as the median;
Since the effect of median filtering depends on the size of the filter window (too large a window blurs edges; too small a window denoises poorly), the median filtering algorithm is improved as follows: the image is scanned line by line, and for each pixel it is judged whether the pixel is the maximum or minimum value of the neighborhood covered by the filter window; if so, the pixel is processed with the normal median filtering algorithm; if not, it is left unchanged;
Image filtering is carried out using the improved 3 × 3 median filtering algorithm;
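The conditional median filtering described above can be sketched as follows. This is an illustrative NumPy implementation, not part of the patent disclosure; the function name and the edge-replication padding are assumptions:

```python
import numpy as np

def improved_median_filter(img, ksize=3):
    """Conditional 3x3 median filter: a pixel is replaced by the window
    median only when it is the minimum or maximum of its neighbourhood,
    otherwise it is kept unchanged, which removes impulse noise while
    preserving edges better than a plain median filter."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders (assumption)
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + ksize, j:j + ksize]
            v = img[i, j]
            # only extreme pixels are treated as noise candidates
            if v == win.min() or v == win.max():
                out[i, j] = np.median(win)
    return out
```

A pure-Python double loop is used for clarity; a production version would vectorize or use an optimized library routine.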
S1.1.2, image enhancement;
Image contrast is enhanced with a piecewise linear transform function, which enhances the contrast between sections of the original image, i.e. it stretches the gray range of interest in the input image while relatively suppressing the gray ranges of no interest;
The piecewise linear transform function takes the following form:
g(x) = (y1/x1)·x, for 0 ≤ x < x1
g(x) = ((y2 - y1)/(x2 - x1))·(x - x1) + y1, for x1 ≤ x < x2
g(x) = ((255 - y2)/(255 - x2))·(x - x2) + y2, for x2 ≤ x ≤ 255    (3.1)
where (x1, x2) and (y1, y2) are the main parameters in formula (3.1): x1 and x2 delimit the gray-level range of the object to be transformed, while y1 and y2 determine the slope of the linear transformation;
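The piecewise linear stretch can be sketched as follows; a minimal NumPy version assuming a three-segment mapping over [0, 255] with 0 < x1 < x2 < 255 and 8-bit output (the rounding to uint8 is an implementation choice, not stated in the text):

```python
import numpy as np

def piecewise_linear(img, x1, x2, y1, y2, max_val=255):
    """Three-segment gray-level stretch: [0,x1]->[0,y1], [x1,x2]->[y1,y2],
    [x2,max_val]->[y2,max_val]. Choosing a middle-segment slope
    (y2-y1)/(x2-x1) > 1 expands the gray range of interest and
    compresses the ranges outside it."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    lo = img < x1
    hi = img > x2
    mid = ~(lo | hi)
    out[lo] = img[lo] * (y1 / x1)
    out[mid] = (img[mid] - x1) * ((y2 - y1) / (x2 - x1)) + y1
    out[hi] = (img[hi] - x2) * ((max_val - y2) / (max_val - x2)) + y2
    return np.clip(np.rint(out), 0, max_val).astype(np.uint8)
```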
S1.2, image segmentation;
Image segmentation uses thresholding: background and object are separated by choosing an optimal threshold. A reasonable threshold is set and each pixel is judged against it; the gray values of the parts meeting the given threshold range are set to 0 and the others to 1, so that the target of interest is separated from the image and a binary image is generated;
Threshold segmentation transforms the input image f into an output g as follows:
g(i, j) = 1, if f(i, j) ≥ T
g(i, j) = 0, if f(i, j) < T    (3.2)
In the above formula, T is the given threshold; g(i, j) = 0 denotes a pixel of the background and g(i, j) = 1 a pixel of the target object. Threshold segmentation thus scans all pixels of the image f: if f(i, j) ≥ T, the segmented element g(i, j) is an object pixel; otherwise it is a background pixel;
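The binarization above, together with an automatic threshold choice in the spirit of the gray-level second-iteration threshold selection mentioned in the embodiments, can be sketched as follows. The ISODATA-style update rule is an assumption; the patent does not reproduce its exact formula:

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative threshold selection: start from the global mean gray
    level, then repeatedly set T to the midpoint of the mean gray levels
    of the two classes it separates, until T stabilises."""
    t = img.mean()
    while True:
        lo = img[img < t]
        hi = img[img >= t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def threshold_segment(img, T):
    """Formula (3.2): g(i,j) = 1 where f(i,j) >= T (object), else 0."""
    return (img >= T).astype(np.uint8)
```

For a clearly bimodal image the iteration converges in a few steps to a threshold between the two modes.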
S1.3, feature selecting and feather extrinsic region extract;
After the regions of interest have been obtained by image segmentation, simple region descriptors can be used to represent the features of each region, and these region features are combined into feature vectors for classification;
The simple region descriptors include the perimeter, area, compactness, centroid, gray mean and gray median of the region, the minimum rectangle enclosing the region, the minimum or maximum gray value, the number of pixels above or below the mean, and the Euler number;
S1.4, localization of bird's nest feather impurity regions;
When locating the identified feather impurities, they must first be classified and corrected;
S1.4.1, classification of feather impurity regions;
Using the Euclidean distance d between the centroid of each feather impurity region and the region point nearest to the centroid, the feather impurity regions can be divided into two classes. The centroid coordinates (R, C) of each feather impurity region are found with the centroid formulas (4.4) and (4.5); the Euclidean distance formula is then:
d = √((R - i1)² + (C - j1)²)
where (i1, j1) is the region point closest to the centroid;
If d = 0, the feather region belongs to the first class, i.e. the centroid falls inside the feather region; if d ≠ 0, the feather region belongs to the second class, i.e. the centroid falls outside the feather region;
For the first class, the centroid falling inside the feather region is exactly the required point; the second-class feather regions must be corrected: a centroid falling outside its feather region is mapped by the correction algorithm to a new point (ic, jc) inside the feather region;
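The two-class decision above can be sketched as follows; a minimal NumPy version in which the "d = 0" condition is tested by checking whether the rounded centroid is itself a region pixel (an assumption, since a sub-pixel centroid rarely coincides exactly with a pixel center):

```python
import numpy as np

def classify_region(mask):
    """Classify a feather region given its binary mask: class 1 if the
    centroid falls inside the region (the d = 0 case), class 2 if it
    falls outside (d != 0). Also returns d, the Euclidean distance from
    the centroid to the nearest region pixel."""
    rows, cols = np.nonzero(mask)
    r_c, c_c = rows.mean(), cols.mean()  # region centroid (R, C)
    # distance from the centroid to the closest pixel of the region
    d = np.sqrt((rows - r_c) ** 2 + (cols - c_c) ** 2).min()
    # "inside" is approximated by testing the rounded centroid pixel
    inside = mask[int(round(r_c)), int(round(c_c))] > 0
    return (1 if inside else 2), d
```

A solid region yields class 1; a ring-shaped region, whose centroid falls in the hole, yields class 2 and requires the correction of step S1.4.2.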
S1.4.2, mass center amendment;
To correct centroids falling outside their feather regions, a straight-line extension algorithm based on the semi-major axis of the minimum enclosing ellipse is introduced;
S1.5,3D bird's nest impurity are rebuild;
The 2D image of the area-array camera is stereoscopically registered with the 3D image of the depth camera by using a spatial geometric coordinate transformation to find the correspondence between the pixel coordinates of the two images. First, the Mark region and the centroid coordinates of the Mark region are extracted from the 2D bird's nest image of the area-array camera and from the image of the 3D camera respectively. Then feather impurity features are extracted from two or more images of the area-array camera, yielding the centroid coordinates of the bird's nest feather impurity regions. Next, the matching feather impurity feature regions in the 3D bird's nest image are found according to the Mark point conversion formula. Finally, the matching of the images is completed and the 3D model is generated;
S1.5.1, the acquisition and bird's nest impurity Model Reconstruction of bird's nest feather impurity point cloud data;
According to the generated 3D impurity regions, the original 3D bird's nest image is segmented correspondingly to produce images of the specified bird's nest feather impurity regions. Each 3D feather impurity region image is then decomposed into images containing the three-dimensional X, Y and Z coordinate information, and the three-dimensional X, Y, Z images are converted into a 3D bird's nest feather impurity point cloud, giving discrete three-dimensional feature points of the bird's nest impurity surface. To reconstruct the impurity surface, the point cloud is also triangulated, finally reconstructing the surface of the bird's nest feather impurities;
S1.5.2, the identification of bird's nest feather impurity characteristics;
The correspondence between the two area-array-camera bird's nest images and the 3D-camera bird's nest image is obtained with the Mark point formula, and the points of all impurity regions acquired in the 3D bird's nest image are represented in three dimensions;
According to the generated 3D impurity regions, the original 3D bird's nest image is reduced to the specified feather impurity region images, and each 3D feather impurity region image is decomposed into images of the x, y and z coordinates of the 3D points, where the z image is a height image. Taking the z image as the processing object, the feather impurity coordinates (x, y) are found with the centroid formula; each two-dimensional coordinate point in the 3D image then corresponds to a fixed height value Z. Since the gray value of the z image is exactly the height value Z of the surface, the gray mean value Mean and deviation Deviation are first computed, taking the centroid in the z image as the circle center, over the annulus between the minimum enclosing circle of the feather impurity region and that circle enlarged by 30 pixels, and likewise Mean1 and Deviation1 over the annulus between the circles enlarged by 30 and by 60 pixels, described by the following formulas:
Mean = (1/N) · Σi gi    (3.4)
Deviation = √((1/N) · Σi (gi - Mean)²)    (3.5)
where the gi are the N gray values of the z image inside the annulus. The height Z is then:
Z = Mean - Mean1    (3.6)
so that the three-dimensional coordinates (Row3m, Colm3m, Z) of each feather region are obtained from the 3D camera;
S2, experiment;
S2.1, camera calibration;
S2.1.1, distortion correction;
The lenses used in vision systems all exhibit distortion to varying degrees, and the distance of a pixel from the image center affects the degree of distortion: the closer to the image center, the smaller the distortion. This distortion is nonlinear and can be described by the following formula:
x̄ = x + δx(x, y)
ȳ = y + δy(x, y)    (4.1)
In the above formula, (x̄, ȳ) are the distortion-free ideal pixel coordinates satisfying the linear imaging model, (x, y) are the actual image point coordinates, and δx and δy are the nonlinear distortion values, which depend on the position of the image point in the image and can be expressed as:
δx(x, y) = k1·x·(x² + y²) + (p1·(3x² + y²) + 2·p2·x·y) + s1·(x² + y²)
δy(x, y) = k2·y·(x² + y²) + (p2·(x² + 3y²) + 2·p1·x·y) + s2·(x² + y²)    (4.2)
where the first term of δx or δy is the radial distortion, the second term the centrifugal (tangential) distortion, and the third term the thin-prism distortion; the coefficients in the formula are called the nonlinear distortion parameters. Introducing too many nonlinear distortion parameters makes the solution unstable and degrades precision; formula (4.2) can therefore be simplified to:
δx(x, y) = k1·x·(x² + y²)
δy(x, y) = k2·y·(x² + y²)    (4.3)
It is thus apparent that the distortion increases as the radial radius increases, i.e. the farther a part of the image is from the image center, the more severely it is distorted;
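The simplified radial-only model can be sketched as follows. This is illustrative: the coefficient names k1 and k2 follow the text, and the coordinate origin is assumed to be the image center:

```python
def radial_distort(x, y, k1, k2):
    """Simplified nonlinear distortion (formula 4.3): only the dominant
    radial term is kept, delta_x = k1*x*r^2 and delta_y = k2*y*r^2 with
    r^2 = x^2 + y^2 measured from the image centre, so the distortion
    grows with the radial radius."""
    r2 = x * x + y * y
    return x + k1 * x * r2, y + k2 * y * r2
```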
S2.1.2, Mark point are chosen;
Through the Mark point, the coordinates of the detected object in the 2D image are transformed into the 3D image; whether the converted three-dimensional coordinates coincide with the detected object position determines whether the camera calibration target is achieved;
Image acquisition uses an area-array camera combined with a 3D camera. The Mark point is chosen as a circle or rectangle of regular shape and uniform color with a certain height;
Since the Mark point and the detected object differ markedly in gray level and shape, the Mark point region is first segmented with the gray threshold method, and the Mark point is then identified with the feature extraction method;
S2.1.3, Mark point center-of-mass coordinate calculate;
The Mark points in the 2D image and the 3D image are identified with the Mark point recognition method described above, and the respective Mark point centroid coordinates are then calculated. For a 2D discretized digital image with f(x, y) ≥ 0, the (p+q)-order moment Mpq and central moment μpq are defined as:
Mpq = Σx Σy x^p · y^q · f(x, y)    (4.4)
μpq = Σx Σy (x - ic)^p · (y - jc)^q · f(x, y)    (4.5)
In the above formulas, (ic, jc) are the centroid coordinates, with ic = M10/M00 and jc = M01/M00;
Therefore the Mark point centroid coordinates of the 2D image and the 3D image, (i1, j1) and (i2, j2) respectively, are found by the above formulas;
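The moment-based centroid of formulas (4.4) and (4.5) can be sketched as follows; an illustrative NumPy version in which rows are taken as the x index and columns as the y index, matching the convention (ic, jc) = (M10/M00, M01/M00):

```python
import numpy as np

def moment(img, p, q):
    """Raw (p+q)-order moment M_pq = sum_x sum_y x^p * y^q * f(x, y)."""
    x, y = np.indices(img.shape)
    return (x ** p * y ** q * img).sum()

def centroid(img):
    """Centroid (i_c, j_c) = (M10/M00, M01/M00) of a gray image f >= 0."""
    m00 = moment(img, 0, 0)
    return moment(img, 1, 0) / m00, moment(img, 0, 1) / m00
```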
S2.1.4, image coordinate convert Mark point reduction formula;
According to the Mark point coordinates, the coordinates of the detected object in the 2D image are transformed into the 3D image by coordinate conversion. If the coordinate point obtained in the 3D image after conversion is exactly the position of the detected object, the calibration between the area-array camera and the 3D camera is complete;
S2.2, experimental result;
S2.2.1, Mark point center-of-mass coordinate;
The centroids of the Mark points in the 2D image and the 3D image are calculated with the centroid formula;
S2.2.2, bird's nest extrinsic region center-of-mass coordinate;
Two empty arrays R and C are created to store the row and column coordinates of the feather region centroids. Twenty bird's nest feather impurity regions are detected in the 2D bird's nest image; the region centroid coordinates are (R, C);
S2.2.3, bird's nest extrinsic region coordinate;
Two empty arrays Rows and Columns are created to store the row and column coordinates of the feather region centroids. Twenty bird's nest feather impurity regions are detected in the 2D bird's nest image; the region coordinates are (Rows, Columns);
The 3D feather impurity region coordinates can be calculated according to the Mark point conversion formula: two empty arrays Rows3m and Columns3m are created to store the row and column coordinates of the feather region centroids in the 3D bird's nest image;
S2.2.4, bird's nest impurity 3D model;
Using the obtained 3D coordinates (Row3m, Colm3m, Z) of the bird's nest feather impurity regions, the 3D model of the bird's nest impurities is generated;
The specific position and size of the impurities can be seen from the model, which facilitates impurity picking. However, these are only the impurities on the surface layer of the bird's nest; feather impurities inside the bird's nest that are not exposed at the surface are not detected. Therefore, the bird's nest whose surface layer has been sorted can be recycled through the combined area-array-camera and 3D-camera method: impurity images of the interior of the bird's nest are acquired, the images are processed with the same method described above, and the feather impurity regions are sorted out.
Preferably, the straight-line extension algorithm based on the semi-major axis of the minimum enclosing ellipse in step S1.4.2 proceeds as follows:
Step 1: find the minimum enclosing ellipse of each second-class feather region, obtaining the semi-major axis a of the minimum enclosing ellipse of each region;
Step 2: taking the centroid (R, C) outside the region as the starting point and the region point (i1, j1) nearest to the centroid as the end point, connect the two points to obtain the line segment L;
Step 3: taking the nearest region point (i1, j1) as the starting point, extend the line segment L by the length a to obtain the new straight line M;
Step 4: find the intersection point (m, n) of the new straight line M with the feather region, and compute the midpoint (p, q) of the point (m, n) and the nearest region point (i1, j1); this midpoint is the corrected point required.
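Steps 1 to 4 above can be sketched as follows; an illustrative version that approximates the line-region intersection by sampling along the extension. The ellipse fit of Step 1 is assumed to be done elsewhere, so the semi-major axis a is passed in as a parameter:

```python
import numpy as np

def correct_centroid(mask, centroid_rc, a, n_samples=200):
    """Line-extension correction for a class-2 region (centroid outside
    the region): from the nearest region point, extend the
    centroid -> nearest-point segment by the semi-major-axis length a,
    take the farthest sampled point (m, n) of the extension that still
    lies in the region, and return the midpoint of (m, n) and the
    nearest point as the corrected centroid."""
    rows, cols = np.nonzero(mask)
    # nearest region pixel (i1, j1) to the centroid
    d2 = (rows - centroid_rc[0]) ** 2 + (cols - centroid_rc[1]) ** 2
    k = int(np.argmin(d2))
    i1, j1 = float(rows[k]), float(cols[k])
    # unit direction of the segment centroid -> nearest point
    v = np.array([i1 - centroid_rc[0], j1 - centroid_rc[1]], dtype=float)
    v /= np.linalg.norm(v)
    # sample the extension of length a and keep the farthest in-region point
    best = (i1, j1)
    for t in np.linspace(0.0, a, n_samples):
        p = (i1 + t * v[0], j1 + t * v[1])
        r, c = int(round(p[0])), int(round(p[1]))
        if 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1] and mask[r, c]:
            best = p
    m, n = best
    return (m + i1) / 2.0, (n + j1) / 2.0
```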
Preferably, the calibration process in step S2.1.4 is as follows:
Step 1: find the 2D-image Mark point coordinates (i1, j1) and the 2D-image impurity region coordinates (Rows, Columns);
Step 2: find the row and column offsets between the Mark point coordinates and the impurity region coordinates in the 2D image:
Row = Rows - i1
Col = Columns - j1    (4.6)
Step 3: calculate the length ratio KL and the width ratio KW between the 2D image and the 3D image:
KL = L2D / L3D
KW = W2D / W3D    (4.7)
(L2D is the length of the 2D image and L3D the length of the 3D image)
Step 4: calculate the row and column coordinates of the 3D impurity regions using the Mark point:
Row3m = KL · i2 + Row
Colm3m = KW · j2 + Col    (4.8)
Step 5: the obtained (Row3m, Colm3m) are the two-dimensional coordinates of each feather region in the 3D image;
Step 6: each two-dimensional coordinate point in the 3D image corresponds to a fixed height value Z, so the three-dimensional coordinates (Row3m, Colm3m, Z) of each feather region can be obtained from the 3D camera;
Here, a maximum permissible deviation σ = 0.5 mm is set. If the computed three-dimensional coordinates (Row3m, Colm3m, Z) lie within the permissible maximum deviation, the camera calibration is complete; otherwise, distortion correction is applied to the lens again and the calculation restarts from Step 1. Since the precision of the permissible maximum deviation is 0.1 mm, the recalculated results are kept to a precision of 0.01 mm to reduce the calculation error.
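The coordinate conversion of formulas (4.6) to (4.8) can be sketched as follows. This is illustrative only: the direction of the scale ratios KL and KW, taken here as 2D over 3D, is an assumption, since formula (4.7) is not reproduced in the text:

```python
def to_3d_coords(mark2d, mark3d, region2d, size2d, size3d):
    """Map a 2D-image region centroid into 3D-image coordinates via the
    Mark point (formulas 4.6-4.8): the region offsets Row/Col are taken
    relative to the 2D Mark point and applied to the scaled 3D Mark
    point. Scale factors KL/KW are the image length/width ratios
    (assumed direction: 2D over 3D)."""
    i1, j1 = mark2d           # Mark point centroid in the 2D image
    i2, j2 = mark3d           # Mark point centroid in the 3D image
    rows, columns = region2d  # feather-region centroid in the 2D image
    row, col = rows - i1, columns - j1   # formula (4.6)
    kl = size2d[0] / size3d[0]           # formula (4.7), assumed L2D/L3D
    kw = size2d[1] / size3d[1]
    return kl * i2 + row, kw * j2 + col  # formula (4.8)
```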
Compared with the prior art, the present invention has the following beneficial effects:
The present invention improves the working efficiency of bird's nest feather impurity sorting and effectively reduces the production cost of bird's nest. The product precision is higher than that of manual picking, and the feather impurity picking work can be carried out stably over long periods. Introducing the method of the present invention improves the quality of bird's nest products, reduces their missed-detection and false-detection rates, and yields reliable, stable and accurate detection of bird's nest products. At the same time, the labor intensity of the workers is reduced, lessening injury to the workers' eyes, cervical vertebrae, and physical and mental health; working efficiency is improved, labor cost is reduced, and the competitiveness of the enterprise in the market is greatly enhanced.
Detailed description of the invention
Fig. 1 is the bird's nest feather impurity identification and localization flowchart of the present invention;
Fig. 2 is the original 2D bird's nest image of the invention;
Fig. 3 is the original 3D bird's nest image of the invention;
Fig. 4 is the median filtering algorithm schematic diagram of the invention;
Fig. 5 is the gray-level piecewise linear transform schematic diagram of the invention;
Fig. 6 is the 2D bird's nest image after threshold segmentation of the background in the present invention;
Fig. 7 is the improved-median-filtered bird's nest image of the present invention;
Fig. 8 is the bird's nest impurity region schematic diagram of the invention;
Fig. 9 is the 2D bird's nest feather impurity centroid schematic diagram of the invention;
Fig. 10 is the 3D bird's nest impurity region schematic diagram of the invention;
Fig. 11 is the three-dimensional bird's nest impurity depth-image processing flowchart of the invention;
Fig. 12 is the bird's nest impurity centroid three-dimensional height value schematic diagram of the invention;
Fig. 13 is the 2D bird's nest Mark point region schematic diagram of the invention;
Fig. 14 is the 3D bird's nest Mark point region schematic diagram of the invention;
Fig. 15 is the two-dimensional Mark point centroid schematic diagram of the invention;
Fig. 16 is the three-dimensional Mark point centroid schematic diagram of the invention;
Fig. 17 is the bird's nest impurity 3D model diagram generated by the present invention.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
One, for the problem of sorting bird's nest feather impurities, the bird's nest impurity sorting method of the present invention fusing 2D and 3D images takes the bird's nest as the test object and proposes a working mode combining an area-array camera with a 3D camera: the advantage of the area-array camera is used to sort the feather impurities in the bird's nest region, the height values of the bird's nest impurity map in the 3D image are then obtained with the 3D camera to form the three-dimensional information of the impurity regions, and the three-dimensional model of the feather impurity regions is generated by combining the obtained three-dimensional feather impurity information, which facilitates the picking of bird's nest impurities.
As shown in Fig. 1~17, specifically:
Two, the fusion method of the 2D image and the 3D image.
Since the bird's nest and its feather impurities are irregular in shape and uneven, three-dimensional coordinates are needed to describe the impurity positions. According to the shape and gray-level characteristics of the bird's nest and its feather impurities, and as verified by repeated experiments, a working mode combining an area-array camera with a 3D camera is proposed: the advantage of the area-array camera is used to sort the feather impurities in the bird's nest region, and the height values of the bird's nest impurity map in the 3D image are then obtained with the 3D camera, forming the three-dimensional information of the impurity regions. Fig. 1 shows the bird's nest feather impurity processing flowchart of the invention.
Taking the bird's nest as the test object, the present invention identifies and locates bird's nest feather impurities based on machine vision technology, mainly completing the following work:
(1) For the problem that the bird's nest and its feather impurities are irregular in shape and uneven, so that three-dimensional coordinates are needed to describe the impurity positions, the present invention proposes a working mode combining an area-array camera and a 3D camera: the area-array camera is used to identify feather impurities in the bird's nest region, and the image height values are then obtained with the 3D camera, yielding the three-dimensional information of the impurity regions;
(2) For feather impurity identification, after the acquired images are preprocessed, a second-iteration gray-level threshold selection method is proposed to highlight the feather impurity regions, and the feather impurities are then identified more accurately by shape feature extraction. The experimental results show that the identification precision is high and the missed-detection rate is low, far exceeding the manual detection mode and meeting the design requirements of the present invention;
(3) For impurity region detection and localization, the identified feather regions are classified by the Euclidean distance criterion, a straight-line extension method based on the semi-major axis of the region's minimum enclosing ellipse is proposed to correct region centroids falling outside the feather region, and a Mark-point-based localization and matching method is proposed: combined with the height value of every point available from the 3D camera image, the three-dimensional information of the feather impurity regions is obtained. This method is simple to compute and highly efficient, with a localization error within 0.5 mm precision;
(4) Finally, each algorithm proposed by the present invention is tested and analyzed. The experimental results show that the proposed method has high detection precision, a low missed-detection rate and a false-detection rate below 4%; its time consumption is far less than that of the manual detection mode, it is not disturbed by human factors, and its overall performance achieves the desired results and meets actual production and processing requirements.
Three, the reconstruction and identification method of bird's nest impurities.
The present invention first performs image preprocessing. As can be seen from Fig. 2 and Fig. 3, the background of the original bird's nest images is complex, so the image background must be removed first: using the gray-level difference between the bird's nest and its feather impurities, image noise is filtered by median filtering, the contrast between the bird's nest region and the background is enhanced by the piecewise linear transform, and a second-iteration gray-level automatic threshold selection method is then proposed to segment the background. The Mark point region is extracted from the background-removed image, and the Mark point centroid coordinates are found. Since a small number of falsely detected regions are mixed into the feather impurity regions produced by the initial segmentation, a feather impurity feature selection and feature matching method is proposed to reject the falsely detected regions and sort out the feather impurity regions again. For feather impurity localization, the present invention proposes a Mark point localization method that transforms the feather impurity region coordinates in the 2D image into the 3D image; using the height value of every point available from the 3D camera image, the three-dimensional information of the feather impurity regions is obtained. Finally, the three-dimensional coordinates of the feather impurity regions are used to generate the 3D bird's nest impurity model, which facilitates the picking of bird's nest impurities.
3.1, image preprocessing;
Image preprocessing is a very important link in the bird's nest identification process. Image acquisition is subject to interference from various noises and the surrounding environment, which not only degrades the image but often submerges the required relevant information, complicating subsequent feature extraction. To filter out noise interference, improve image quality and highlight the required relevant information, relevant image preprocessing must be performed before bird's nest impurity identification and detection. Common image preprocessing algorithms include image filtering, image enhancement, image segmentation and morphological processing. Using image preprocessing algorithms both makes the subsequent processing of the object easier and achieves better results.
3.1.1, image filtering;
The effect of image filtering is to try to retain minutia in acquired image, eliminates or weakening is mixed into target image In garbage.Image filtering is that one kind can enhance image recognition effect with rich image information content, improving image quality Processing method, treatment effect directly affect subsequent image processing process, and with the validity of feature identification link and reliably Property is closely related, is an indispensable important step during image preprocessing.
In bird's nest image transmitting and treatment process, often by various noise pollutions, there is the dim spot of image or bright spot interference, While reducing picture quality, the accuracy of feature extraction in image procossing is had an effect on.Therefore, it will usually choose effective figure It is influenced as filtering algorithm solves noise bring.Common Image filter arithmetic has: the mean value filter in frequency domain filtering method, spatial domain Wave method and median filtering method etc..
Median filtering is a nonlinear signal-processing method; essentially, it is a rank-order filter. For a point (i, j) in the original image, the median of all pixels in the neighborhood centered on that point is taken as the response at (i, j). Compared with frequency-domain filtering and mean filtering, median filtering is not only fast but also very effective against isolated noise pixels (such as impulse and salt-and-pepper noise); the filtered image remains relatively sharp, and the useful edge information of the image is preserved.
The median filtering algorithm is a neighborhood operation: when filtering an image, the values under the template are sorted in ascending order, and the median of the sorted data is assigned to the pixel at the template center. If the template has an odd number of points, the gray value of the middle pixel after sorting is used as the median; if it has an even number of points, the average of the two middle values after sorting is used. The procedure is shown in Figure 4. In practice, the template shape and size must be chosen according to the actual conditions.
The effect of median filtering depends on the size of the filter window: too large a window blurs edges, while too small a window denoises poorly. The present invention therefore improves the median filtering algorithm as follows: the image is scanned line by line, and for each pixel it is judged whether that pixel is the maximum or minimum of the neighborhood covered by the filter window. If so, the pixel is processed with the normal median filter; if not, it is left unchanged. The present invention filters the image with this improved 3 × 3 median filtering algorithm; the result after the improved median filtering is shown in Figure 7.
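The selective filtering idea described here can be sketched as a minimal NumPy implementation. The function name, the edge-replication padding, and the brute-force loop are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def improved_median_filter(img, k=3):
    """Selective k x k median filter: a pixel is replaced by the window
    median only when it is the minimum or maximum of the neighborhood
    covered by the filter window (a likely impulse / salt-and-pepper
    outlier); all other pixels pass through unchanged, preserving edges."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            if img[i, j] == win.min() or img[i, j] == win.max():
                out[i, j] = np.median(win)
    return out
```

Because only window extrema are replaced, an isolated impulse is removed while a moderate detail pixel survives, which is exactly the trade-off the improvement targets.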
3.1.2, image enhancement;
Enhancing image contrast with a piecewise linear transform function actually enhances the contrast between the parts of the original image: the gray ranges of interest in the input image are stretched while the uninteresting gray ranges are relatively suppressed. The main advantage of the piecewise linear transform is that its form can be composed arbitrarily.
The piecewise linear transform function takes the three-segment form

f(x) = (y1/x1)·x, for 0 ≤ x < x1;
f(x) = ((y2 − y1)/(x2 − x1))·(x − x1) + y1, for x1 ≤ x ≤ x2;
f(x) = ((255 − y2)/(255 − x2))·(x − x2) + y2, for x2 < x ≤ 255.   (3.1)

(x1, x2) and (y1, y2) are the main parameters of formula (3.1): x1 and x2 delimit the gray range of the processed object that needs to be transformed, while y1 and y2 determine the slopes of the linear segments. Different combinations of x1, x2, y1, y2 yield different transform effects; the graph of the piecewise linear transform function is shown in Figure 5.
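A common three-segment realization of the piecewise linear gray transform can be sketched as follows, under the assumption that the segments span [0, x1], [x1, x2] and [x2, 255] as is conventional (the parameter names x1, x2, y1, y2 follow the text; the function name is illustrative):

```python
import numpy as np

def piecewise_linear(img, x1, x2, y1, y2, max_val=255.0):
    """Three-segment linear gray transform: the range [x1, x2] is
    stretched onto [y1, y2]; the outer segments map [0, x1] -> [0, y1]
    and [x2, 255] -> [y2, 255]."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    lo = img < x1
    mid = (img >= x1) & (img <= x2)
    hi = img > x2
    out[lo] = (y1 / x1) * img[lo]
    out[mid] = (y2 - y1) / (x2 - x1) * (img[mid] - x1) + y1
    out[hi] = (max_val - y2) / (max_val - x2) * (img[hi] - x2) + y2
    return np.clip(out, 0, max_val).astype(np.uint8)
```

With y2 − y1 larger than x2 − x1, the middle segment's slope exceeds 1, which is what stretches the gray range of interest.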
3.2, image segmentation;
Image segmentation divides the image into several regions with unique properties and extracts the targets of interest from them; these regions are usually characterized by similarity within a region and discontinuity between regions.
The present invention uses threshold segmentation, separating background and object by choosing an optimal threshold. Threshold segmentation judges the image against a reasonable threshold: each pixel's gray value is set to 0 or 1 according to whether it meets the threshold condition, so that the target of interest is separated from the image and a binary image is produced. Threshold segmentation is an indispensable step in image processing.
Threshold segmentation transforms the input image f into an output g as follows:

g(i, j) = 1, if f(i, j) ≥ T;  g(i, j) = 0, if f(i, j) < T.

Here T is the given threshold; g(i, j) = 0 marks a pixel of the background and g(i, j) = 1 a pixel of the target object (or vice versa). That is, all pixels of image f are scanned: if f(i, j) ≥ T, the element g(i, j) of the segmented image is an object pixel; otherwise it is a background pixel.
The result of threshold segmentation is shown in Figure 6.
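The patent does not name the criterion used to choose the "optimal threshold"; Otsu's between-class variance maximization is one common choice and is sketched here alongside the binarization itself (a sketch under that assumption, not the patent's necessarily identical method):

```python
import numpy as np

def otsu_threshold(img):
    """Choose T that maximizes the between-class variance (Otsu's method)
    over the 256-bin gray histogram of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0    # class means
        m1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def threshold_segment(img, T):
    """g(i, j) = 1 where f(i, j) >= T (object), else 0 (background)."""
    return (img >= T).astype(np.uint8)
```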
3.3, feature selection and feather impurity region extraction;
After the regions of interest have been obtained by image segmentation, simple region descriptors can be used as features representing each region. These region features are usually combined into a feature vector for classification. Common simple region descriptors include perimeter, area, compactness, region centroid, gray mean, gray median, the minimum rectangle enclosing the region, minimum or maximum gray level, the number of pixels above or below the mean, and the Euler number.
The bird's nest feather regions are selected by these features; the empirical values used in the present invention are LW = 4, S = 3.8, and an area Area within (800, 10000). The recognition result is shown in Figure 8, where panel (a) is the original 2D bird's nest image and panel (b) shows the bird's nest impurity regions.
3.4, positioning of bird's nest feather impurity regions;
(1) Classification of feather impurity regions. Before the identified feather impurities can be located, they must be classified and corrected. Using the Euclidean distance d between each feather region's centroid and the nearest point of its own region, the feather impurity regions can be divided into two classes. The feather region centroid (R, C) is found from the centroid formulas (4.4) and (4.5); the Euclidean distance is

d = √((R − i1)² + (C − j1)²),

where (i1, j1) is the region point nearest the centroid.
If d = 0, the feather region belongs to the first class, i.e., the centroid falls inside the feather region; if d ≠ 0, it belongs to the second class, i.e., the centroid falls outside the feather region. First-class centroids, falling inside the feather region, are exactly what the present invention requires; second-class feather regions must be corrected, moving the centroid that falls outside the region to a new point (ic, jc) inside the feather region by means of a correction algorithm.
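The two-class test can be sketched as follows. The function and variable names are illustrative, and the centroid (R, C) is assumed to have been computed already by the moment formulas:

```python
import numpy as np

def classify_feather_region(region_pts, centroid):
    """Return (d, cls, nearest): d is the minimum Euclidean distance from
    the centroid to the region's pixels; cls is 1 (centroid inside the
    region, d == 0) or 2 (centroid outside, d != 0); nearest is the
    region point closest to the centroid, kept for the later correction."""
    pts = np.asarray(region_pts, dtype=np.float64)
    c = np.asarray(centroid, dtype=np.float64)
    dists = np.sqrt(((pts - c) ** 2).sum(axis=1))
    k = dists.argmin()
    d = dists[k]
    return d, (1 if d == 0 else 2), tuple(pts[k])
```

A centroid can fall outside its own region when the region is concave or ring-like, which is why the second class needs correction at all.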
(2) Centroid correction. To correct centroids lying outside their feather regions, the present invention introduces a line-extension method based on the semi-major axis of the minimum enclosing ellipse. The algorithm proceeds as follows:
Step 1: find the minimum enclosing ellipse of each second-class feather region, obtaining the semi-major axis a of each region's minimum enclosing ellipse;
Step 2: taking the outside centroid (R, C) as the start point and the region point (i1, j1) nearest the centroid as the end point, connect the two points to obtain line segment L;
Step 3: taking the nearest region point (i1, j1) as the start point, extend line L by the length a to obtain the new line M;
Step 4: find the intersection (m, n) of the new line M with the feather region, and compute the midpoint (p, q) of (m, n) and the nearest region point (i1, j1); this midpoint is the corrected point.
The centroid point of each feather impurity region is shown in Figure 9.
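Steps 1 to 4 of the correction can be sketched loosely as follows. Fitting the minimum enclosing ellipse is omitted (the semi-major axis a is passed in), and the intersection with the region is approximated by sampling the extension in small steps against the region's pixel set — both are simplifying assumptions:

```python
import numpy as np

def correct_centroid(centroid, nearest, a, region):
    """Extend the ray centroid -> nearest by length a (semi-major axis of
    the region's minimum enclosing ellipse); the first region pixel hit
    on the extension is taken as (m, n), and the corrected point is the
    midpoint of (m, n) and the nearest boundary point (i1, j1)."""
    region = {tuple(p) for p in region}
    c = np.asarray(centroid, dtype=float)
    p1 = np.asarray(nearest, dtype=float)
    direction = (p1 - c) / np.linalg.norm(p1 - c)
    for t in np.linspace(0.0, a, int(a * 10) + 1):  # sample the extension
        q = p1 + t * direction
        m_n = (int(round(q[0])), int(round(q[1])))
        if m_n in region:
            return ((m_n[0] + p1[0]) / 2.0, (m_n[1] + p1[1]) / 2.0)
    return tuple(p1)  # fall back to the boundary point itself
```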
After the bird's nest impurity regions have been picked out, the 2D impurity-region coordinates are generated, as shown in Figure 8(b). The 2D region coordinates are then converted into 3D impurity-region coordinates with the Mark point reduction formula, yielding the 3D impurity coordinates, and the 3D impurity regions are generated from these coordinate points. The process is shown in Figure 10, where the black boxes in panel (a) mark the impurity regions in the original 2D image, panel (b) shows the impurity regions in the 2D gray image, and panel (c) shows the 3D impurity regions.
3.5, 3D bird's nest impurity reconstruction;
Stereo registration of the area-array camera's 2D image with the depth camera's 3D image uses a spatial geometric coordinate transformation to find the correspondence between pixel coordinates in the two images. First, the Mark region and its centroid coordinates are extracted from the area-array camera's 2D bird's nest image and from the 3D camera's image; then feather impurity features are extracted from two or more area-array images, yielding the centroids of the feather impurity regions; next, the matching feather impurity feature regions in the 3D bird's nest image are found with the Mark point reduction formula; finally the images are matched and the 3D model is generated.
3.5.1, acquisition of feather impurity point cloud data and impurity model reconstruction;
According to the 3D impurity regions generated in Figure 10, the original 3D image is segmented accordingly to produce the specified feather impurity region image. This 3D feather impurity image is then decomposed into images containing the X, Y and Z coordinates of each 3D point, and the X, Y, Z images are converted into a 3D feather impurity point cloud, giving only discrete 3D feature points of the impurity surface. To reconstruct the impurity surface, these points are also triangulated, finally yielding the surface of the feather impurity. The reconstruction process is shown in Figure 11.
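Decomposing the region into X, Y, Z coordinate images and stacking them into a point cloud can be sketched as below (the triangulation step that follows — e.g. Delaunay-based surface meshing — would normally use a dedicated library and is not shown; the function name and mask parameter are illustrative assumptions):

```python
import numpy as np

def xyz_images_to_point_cloud(x_img, y_img, z_img, mask=None):
    """Stack per-pixel X, Y, Z coordinate images into an N x 3 point
    cloud, optionally keeping only pixels inside the impurity-region
    mask so that only the impurity surface points are reconstructed."""
    if mask is None:
        mask = np.ones(z_img.shape, dtype=bool)
    return np.stack([x_img[mask], y_img[mask], z_img[mask]], axis=1)
```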
3.5.2, bird's nest feather impurity feature recognition;
The correspondence between the two area-array camera bird's nest images and the 3D camera bird's nest image is obtained with the Mark point formula, yielding the 3D representation, within the 3D bird's nest image, of the points of all impurity regions.
According to the 3D impurity regions generated above, the original 3D image is reduced to the specified feather impurity region image, which is then decomposed into x, y and z coordinate images of the 3D points; the z image is the height image. Taking the z image as the processing object, the feather impurity coordinates (x, y) are found with the centroid formula; each 2D coordinate point in the 3D image corresponds to a fixed height value Z. Since the gray value of the z image is exactly the height Z of the image, the gray mean Mean and deviation Deviation are first computed, centered on the centroid in the z image, over the minimum enclosing circle of the feather impurity region and over the ring between the circles at +30 pixels and +60 pixels beyond it, described by the standard mean and standard deviation formulas (3.4) and (3.5):

Mean = (1/N)·Σ z(i, j),  Deviation = √((1/N)·Σ (z(i, j) − Mean)²).
The height Z is the difference between the region mean and the background-ring mean Mean1:

Z = Mean − Mean1.   (3.6)
The 3D coordinates (Row3m, Colm3m, Z) of each feather region can thus be obtained from the 3D camera. Figure 12 shows the height values calculated with formulas (3.4) and (3.5).
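The height computation — the mean over the region's minimum enclosing disc minus the mean Mean1 over the surrounding background ring — can be sketched as below. The disc radius and ring offsets follow the +30/+60 pixel description; all names are illustrative:

```python
import numpy as np

def ring_masks(shape, center, r0, d1=30, d2=60):
    """Boolean masks for the inner disc (radius r0) and the background
    ring between radii r0 + d1 and r0 + d2, centered on the centroid."""
    rr, cc = np.ogrid[:shape[0], :shape[1]]
    dist = np.sqrt((rr - center[0]) ** 2 + (cc - center[1]) ** 2)
    return dist <= r0, (dist > r0 + d1) & (dist <= r0 + d2)

def impurity_height(z_img, center, r0, d1=30, d2=60):
    """Z = Mean - Mean1 (eq. 3.6): mean height over the impurity disc
    minus mean height over the surrounding background ring."""
    inner, ring = ring_masks(z_img.shape, center, r0, d1, d2)
    return z_img[inner].mean() - z_img[ring].mean()
```

Subtracting the background-ring mean makes Z a relative height, so the measurement is insensitive to the overall level of the conveying surface.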
The main idea of recognition based on 3D features is: obtain the depth information of the bird's nest image with the 3D camera, build a 3D point cloud model from the depth information, extract 3D feature descriptors (such as target size, shape and boundary) from the point cloud model, and finally perform target recognition with these 3D features. Recognition based on 3D features has high accuracy and robustness, and several targets can be recognized simultaneously.
Four, experiments.
4.1, camera calibration;
The present invention acquires images with an area-array camera combined with a 3D camera; traditional camera calibration methods cannot meet the calibration requirements of the area-array camera together with the 3D camera. Through repeated experiments, the present invention proposes a Mark point method for computing between 2D image coordinates and 3D image coordinates, meeting the positioning requirements without the traditional camera calibration approach. Since most lenses exhibit some camera distortion, this section first analyses camera distortion correction.
4.1.1, distortion correction;
The lenses used in vision systems usually exhibit distortion to varying degrees, and a pixel's distance from the image center affects its degree of distortion: the closer to the image center, the smaller the distortion. This distortion is nonlinear and can be described by the following formula:

x = x̂ + δx(x, y),  y = ŷ + δy(x, y).   (4.1)
In the formula, (x̂, ŷ) are the distortion-free ideal pixel coordinates satisfying the linear imaging model, (x, y) are the actual image point coordinates, and δx and δy are the nonlinear distortion values, which depend on the image point's position in the image and can be expressed as:

δx = k1·x·(x² + y²) + [p1·(3x² + y²) + 2·p2·x·y] + s1·(x² + y²),
δy = k2·y·(x² + y²) + [p2·(3y² + x²) + 2·p1·x·y] + s2·(x² + y²).   (4.2)
Here the first term of δx (or δy) is the radial distortion, the second term the centrifugal (decentering) distortion, and the third term the thin prism distortion. The coefficients in the formula are called nonlinear distortion parameters. It has been found that introducing too many nonlinear parameters can make the solution unstable and hinder rather than improve precision. In general, the first-order radial term alone describes the nonlinear distortion adequately, so formula (4.2) can be simplified to:

δx = k1·x·(x² + y²),  δy = k2·y·(x² + y²).   (4.3)
It is apparent that the distortion grows as the radial radius increases, i.e., the parts far from the image center are distorted more severely.
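The first-order radial model kept after the simplification can be sketched as follows (the coefficient value in the usage below is an arbitrary illustration, not a calibrated value):

```python
def radial_distort(x, y, k1):
    """First-order radial distortion: delta = k1 * r^2 grows with the
    squared radius, so points far from the image center are displaced
    more than points near it."""
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)
```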
4.1.2, Mark point selection;
Through repeated experiments, the present invention transforms the detected object's coordinates in the 2D image into the 3D image via Mark points, and judges whether the converted 3D coordinates coincide with the detected object's position, thereby achieving the camera calibration goal.
The present invention acquires images in a new way that combines an area-array camera with a 3D camera, so the choice of Mark points is especially important: it directly affects system accuracy. To improve recognition accuracy and speed, Mark points should be uniformly colored, regularly shaped circles or rectangles, and for ease of recognition should also have a certain height (flush with the bird's nest surface). The present invention selects Mark points by both geometry and color; the circular and elliptical regions in Figures 13 and 14 are the Mark points of the invention.
Mark points and the detected objects differ markedly in gray level and shape, so the present invention first segments the Mark point regions with a gray-threshold method and then identifies the Mark points with a feature-extraction method.
4.1.3, Mark point centroid coordinate calculation;
The Mark points in the 2D image and the 3D image are identified with the above Mark point recognition method, and their respective centroid coordinates are then calculated. For a 2D discretized digital image f(x, y) ≥ 0, the (p+q)-order moment Mpq and central moment μpq are defined as:

Mpq = Σx Σy x^p · y^q · f(x, y),  μpq = Σx Σy (x − ic)^p · (y − jc)^q · f(x, y).   (4.4)
In the formula, (ic, jc) is the centroid coordinate, with

ic = M10 / M00,  jc = M01 / M00.   (4.5)
The Mark point centroid coordinates of the 2D image and the 3D image, (i1, j1) and (i2, j2) respectively, are found from the above formulas.
The 2D and 3D Mark point centroids are shown as the black dots at the circle center in Figure 15 and at the ellipse center in Figure 16, respectively.
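The centroid computation from raw image moments can be sketched as a few lines of NumPy (the function name is illustrative):

```python
import numpy as np

def centroid(img):
    """Centroid from raw image moments: ic = M10 / M00, jc = M01 / M00,
    where Mpq = sum_i sum_j i^p * j^q * f(i, j)."""
    img = img.astype(np.float64)
    i_idx, j_idx = np.indices(img.shape)
    m00 = img.sum()
    return (i_idx * img).sum() / m00, (j_idx * img).sum() / m00
```

The same routine serves both the Mark point centroids here and the feather-region centroids used earlier for classification.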
4.1.4, Mark point reduction formula for image coordinate conversion;
According to the Mark point coordinates, the detected object's coordinates in the 2D image are transformed into the 3D image. If the transformed 3D coordinate position coincides with the detected object's position, calibration between the planar camera and the 3D camera is complete. The calibration procedure is as follows:
Step 1: find the 2D image Mark point coordinates (i1, j1) and the 2D image impurity-region coordinates (Rows, Columns);
Step 2: find the row and column offsets between the Mark point coordinates and the 2D bird's nest impurity-region coordinates in the 2D image:

Row = Rows − i1,  Col = Columns − j1.   (4.6)
Step 3: compute the ratio KL of the image lengths and the ratio KW of the image widths between the 2D image and the 3D image (L2D denotes the 2D image length and L3D the 3D image length);   (4.7)
Step 4: compute the 3D impurity-region row and column coordinates with the Mark point:

Row3m = KL·i2 + Row,  Colm3m = KW·j2 + Col.   (4.8)
Step 5: the resulting (Row3m, Colm3m) are the 2D coordinates of each feather region within the 3D image;
Step 6: each 2D coordinate point in the 3D image corresponds to a fixed height value Z, so the 3D coordinates (Row3m, Colm3m, Z) of each feather region can be obtained from the 3D camera.
Here, a permitted maximum deviation σ = 0.5 mm is set. If the computed 3D coordinates (Row3m, Colm3m, Z) fall within the permitted maximum deviation, camera calibration is complete. Otherwise, distortion correction is applied to the lens again and the procedure restarts from Step 1. Since the precision of the permitted maximum deviation is 0.1 mm, the recalculated results are retained to a precision of 0.01 mm to reduce the computational error.
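Steps 1 to 4 are plain coordinate arithmetic and can be sketched directly from equations (4.6) through (4.8). The text leaves the direction of the scale ratios KL and KW (2D-to-3D or 3D-to-2D) implicit, so they are passed in as given values here:

```python
def mark_point_transform(region_2d, mark_2d, mark_3d, kl, kw):
    """Map a 2D impurity-region coordinate into the 3D image: take the
    offset from the 2D Mark point (eq. 4.6), then re-anchor it at the
    scaled 3D Mark point (eq. 4.8). kl and kw are the length and width
    ratios between the two images (eq. 4.7)."""
    rows, cols = region_2d
    i1, j1 = mark_2d
    i2, j2 = mark_3d
    row_off, col_off = rows - i1, cols - j1      # eq. 4.6
    return kl * i2 + row_off, kw * j2 + col_off  # eq. 4.8
```

As a sanity check, when the two images share scale (kl = kw = 1) and the Mark point has identical coordinates in both, the mapping is the identity.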
4.2, experimental result;
4.2.1, Mark point center-of-mass coordinate;
The 2D image Mark point centroid and the 3D image Mark point centroid, calculated with the centroid formula, are listed in Table 4-1 below:
Table 4-1, Mark point centroid coordinates

Mark point	Centroid coordinates
2D (i1, j1)	(469.539, 976.531)
3D (i2, j2)	(172.196, 1352.08)
4.2.2, bird's nest impurity region centroid coordinates;
Two empty arrays R and C are created to store the row and column coordinates of the feather region centroids.
Feather impurity regions are detected in the 2D bird's nest image; their centroid coordinates (R, C) are as follows:
R=[1159.88,1179.29,1407.11,1712.31,1708.69,1752.07,2205.47,2275.48, 2215.88,2304.67,970.077,986.445,1005.75,1397.07,1445.52,1494.47,1534.48, 1568.59,1669.79,2103.62,2224.48];
C=[930.488,836.064,1724.77,857.697,1532.5,1157.49,1510.98,1 126.13, 1429.93,840.633,1156.68,1250.89,1079.05,707.473,417.415,557.804,274.456, 359.435,265.062,755.561,599.084];
4.2.3, bird's nest impurity region coordinates;
Two empty arrays Rows and Columns are created to store the row and column coordinates of the feather region pixels.
20 feather impurity regions are detected in the 2D bird's nest image; the regions contain more than 70,000 coordinates in total, so only 50 coordinates of one of the regions are extracted below as (Rows, Columns):
Rows=[949,949,949,949,949,949,950,950,950,950,950,950,950,950,950, 950,950,950,950,950,950,951,951,951,951,951,951,951,951,951,951,951,951,951, 951,951,951,951,951,951,951,951,952,952,952,952,952,952,952,952];
Columns=[1138,1139,1171,1172,1173,1174,1135,1136,1137,1138,1139, 1140,1141,1170,1171,1172,1173,1174,1175,1176,1177,1135,1136,1137,1138,1139, 1140,1141,1142,1169,1170,1171,1172,1173,1174,1175,1176,1177,1178,1179,1180, 1181,1134,1135,1136,1137,1138,1139,1140,1141]
The 3D feather impurity region coordinates can then be calculated with the Mark point reduction formula; two empty arrays Rows3m and Columns3m are created to store the row and column coordinates of the feather region centroids in the 3D bird's nest image.
4.2.4, bird's nest impurity 3D model;
Using the obtained feather impurity region 3D coordinates (Row3m, Colm3m, Z), the bird's nest impurity 3D model shown in Figure 17 is generated, where (a) is the original view of the generated impurity 3D model, (b) is the model rotated 90° to the right, and (c) is the model rotated 90° to the left.
Figure 17 shows the specific position and size of the bird's nest impurities, which facilitates picking. However, these are only the impurities of the surface layer; feather impurities inside the bird's nest that are not exposed at the surface are not detected. The already-sorted surface layer can therefore be fed back through the combined area-array and 3D camera setup for image acquisition of the bird's nest interior, after which the same processing as above is applied to sort out the interior feather impurity regions.
4.3, experimental analysis;
For bird's nest feather impurity recognition, the present invention has carried out experiments on each stage of the detection algorithm. Experimental verification and analysis prove that the sorting method fusing 2D and 3D images detects bird's nest impurities with high accuracy, and that its performance reaches the expected effect and meets actual bird's nest processing requirements.
The present invention improves the efficiency of feather impurity sorting and effectively reduces the production cost of bird's nest; its precision exceeds that of manually picked bird's nest products, and it can pick feather impurities stably over long periods. Introducing the method of the present invention improves the quality of bird's nest products, reduces the miss rate and false-detection rate, and yields reliable, stable and accurate detection of bird's nest products. At the same time, it reduces workers' labor intensity and the strain on their eyes, cervical vertebrae and well-being; it also improves working efficiency, lowers labor cost, and greatly enhances an enterprise's competitiveness in the market.
The above is a preferred embodiment of the present invention, but embodiments of the present invention are not limited to the foregoing: any other change, modification, substitution, combination or simplification made without departing from the spirit and principles of the present invention shall be an equivalent substitution and is included within the scope of the present invention.

Claims (3)

1. A bird's nest impurity sorting method fusing 2D and 3D images, characterized by comprising the following steps:
S1, reconstruction and identification of bird's nest impurities;
S1.1, image preprocessing;
Image preprocessing is a very important link in the bird's nest recognition process; during image acquisition the image is subject to interference from various noise sources and the surrounding environment, which not only degrades the image but often submerges the required information, complicating subsequent feature extraction; to filter out noise, improve image quality and highlight the relevant information, the bird's nest image must be preprocessed before impurity recognition and detection;
S1.1.1, image filtering;
During transmission and processing, bird's nest images are often polluted by various kinds of noise, producing dark-spot or bright-spot interference that lowers image quality and degrades the accuracy of feature extraction; an effective filtering algorithm is therefore chosen to counter the noise; common image filtering algorithms include frequency-domain filtering and, in the spatial domain, mean filtering and median filtering;
The median filtering algorithm is a neighborhood operation: when filtering an image, the values under the template are sorted in ascending order and the median of the sorted data is assigned to the pixel at the template center; if the template has an odd number of points, the gray value of the middle pixel after sorting is used as the median; if the template has an even number of points, the average of the two middle values after sorting is used as the median;
Since the effect of median filtering depends on the filter window size — too large blurs edges, too small denoises poorly — the median filtering algorithm is improved: the image is scanned line by line, and for each pixel it is judged whether that pixel is the maximum or minimum of the neighborhood covered by the filter window; if so, the pixel is processed with the normal median filtering algorithm; if not, it is left unchanged;
Image filtering is performed with the improved 3 × 3 median filtering algorithm;
S1.1.2, image enhancement;
Enhancing image contrast with a piecewise linear transform function actually enhances the contrast between the parts of the original image, i.e., the gray ranges of interest in the input image are stretched while uninteresting gray ranges are relatively suppressed;
The piecewise linear transform function takes the form of formula (3.1), in which (x1, x2) and (y1, y2) are the main parameters: x1 and x2 delimit the gray range of the processed object that needs to be transformed, while y1 and y2 determine the slopes of the linear segments;
S1.2, image segmentation;
Image segmentation uses a threshold method: background and object are separated by choosing an optimal threshold; the image is judged against a reasonable threshold, and each pixel's gray value is set to 0 or 1 according to whether it meets the threshold condition, so that the target of interest is separated from the image and a binary image is produced;
Threshold segmentation transforms the input image f into an output g: g(i, j) = 1 if f(i, j) ≥ T, else g(i, j) = 0, where T is the given threshold, g(i, j) = 0 marks a pixel of the background and g(i, j) = 1 a pixel of the target object; that is, all pixels of image f are scanned, and if f(i, j) ≥ T the element g(i, j) of the segmented image is an object pixel, otherwise it is a background pixel;
S1.3, feature selection and feather impurity region extraction;
After the regions of interest have been obtained by image segmentation, simple region descriptors are used as features representing each region, and these region features are combined into a feature vector for classification;
Wherein the simple region descriptors are perimeter, area, compactness, region centroid, gray mean, gray median, the minimum rectangle enclosing the region, minimum or maximum gray level, the number of pixels above or below the mean, and the Euler number;
S1.4, positioning of bird's nest feather impurity regions;
Before the identified feather impurities can be located, they must be classified and corrected;
S1.4.1, classification of feather impurity regions;
Using the Euclidean distance d between each feather region's centroid and the nearest point of its own region, the feather impurity regions can be divided into two classes; the feather region centroid (R, C) is found from the centroid formulas (4.4) and (4.5), and the Euclidean distance formula is d = √((R − i1)² + (C − j1)²), where (i1, j1) is the region point nearest the centroid;
If d = 0, the feather region belongs to the first class, i.e., the centroid falls inside the feather region; if d ≠ 0, the feather region belongs to the second class, i.e., the centroid falls outside the feather region;
Wherein first-class centroids, falling inside the feather region, are exactly what is required, while second-class feather regions must be corrected: the centroid falling outside a second-class feather region is moved by a correction algorithm to a new point (ic, jc) inside the feather region;
S1.4.2, centroid correction;
To correct centroids lying outside their feather regions, a line-extension algorithm based on the semi-major axis of the minimum enclosing ellipse is introduced;
S1.5, 3D bird's nest impurity reconstruction;
Stereo registration of the area-array camera's 2D image with the depth camera's 3D image finds, via a spatial geometric coordinate transformation, the correspondence between pixel coordinates in the two images; first, the Mark region and its centroid coordinates are extracted from the area-array camera's 2D bird's nest image and from the 3D camera's image; then feather impurity features are extracted from two or more area-array images, yielding the centroids of the feather impurity regions; next, the matching feather impurity feature regions in the 3D bird's nest image are found with the Mark point reduction formula; finally the images are matched and the 3D model is generated;
S1.5.1, acquisition of feather impurity point cloud data and impurity model reconstruction;
According to the generated 3D impurity regions, the original 3D image is segmented accordingly to produce the specified feather impurity region image; this image is then decomposed into images containing the X, Y and Z coordinates of each 3D point, and the X, Y, Z images are converted into a 3D feather impurity point cloud, giving the discrete 3D feature points of the impurity surface; to rebuild the impurity surface, these points are also triangulated, finally reconstructing the surface of the feather impurity;
S1.5.2, bird's nest feather impurity feature recognition;
The correspondence between the two area-array camera bird's nest images and the 3D camera bird's nest image is obtained with the Mark point formula, yielding the 3D representation, within the 3D bird's nest image, of the points of all impurity regions;
According to the generated 3D impurity regions, the original 3D image is reduced to the specified feather impurity region image, which is then decomposed into x, y and z coordinate images of the 3D points, the z image being the height image; taking the z image as the processing object, the feather impurity coordinates (x, y) are found with the centroid formula, and each 2D coordinate point in the 3D image corresponds to a fixed height value Z; since the gray value of the z image is exactly the height Z of the image, the gray mean Mean and deviation Deviation are first computed, centered on the centroid in the z image, over the minimum enclosing circle of the feather impurity region and over the ring between the circles at +30 pixels and +60 pixels beyond it, described by the following formulas:
Wherein the height Z is the difference between the region mean and the background-ring mean Mean1: Z = Mean − Mean1;   (3.6)
So as to obtain the three-dimensional coordinate (Row3m, Colm3m, Z) in each feather region from 3D camera;
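The annulus statistics and the height estimate of equation (3.6) can be sketched as follows. The ring widths (+30 and +60 pixels) follow the text, but treating Mean1 as the gray-value mean over the outer background ring is an assumption, since the text does not define Mean1; the function names are likewise hypothetical.

```python
import numpy as np

def ring_mean(z_img, center, r_in, r_out):
    """Mean gray value (= height) over the annulus r_in <= d < r_out
    around the given (row, col) center of the z image."""
    rows, cols = np.indices(z_img.shape)
    d = np.hypot(rows - center[0], cols - center[1])
    return float(z_img[(d >= r_in) & (d < r_out)].mean())

def impurity_height(z_img, centroid, r0):
    """Height estimate of eq. 3.6: Z = Mean - Mean1, where r0 is the radius
    of the region's minimum enclosing circle. Mean is taken over the ring
    up to +30 pixels, Mean1 (assumed) over the +30..+60 background ring."""
    mean = ring_mean(z_img, centroid, r0, r0 + 30)        # impurity annulus
    mean1 = ring_mean(z_img, centroid, r0 + 30, r0 + 60)  # background annulus
    return mean - mean1
```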
S2, experiment;
S2.1, camera calibration;
S2.1.1, distortion correction;
The lenses used in vision systems generally exhibit some degree of distortion, and a pixel's distance from the image center affects how strongly it is distorted: the closer to the image center, the smaller the distortion. This distortion is nonlinear and can be described with the following formula:
xu = x + δx(x, y)
yu = y + δy(x, y), (4.1)
In the above formula, (xu, yu) represents the ideal, distortion-free pixel coordinates satisfying the linear imaging model, (x, y) represents the actual image point coordinates, and δx and δy are the nonlinear distortion values, which depend on the position of the point in the image and can be expressed with the following formula:
δx = k1x(x² + y²) + [p1(3x² + y²) + 2p2xy] + s1(x² + y²)
δy = k1y(x² + y²) + [p2(x² + 3y²) + 2p1xy] + s2(x² + y²), (4.2)
Here the first term of δx or δy is the radial distortion, the second term the centrifugal (decentering) distortion, and the third term the thin-prism distortion; the coefficients in the formula are called the nonlinear distortion parameters. Introducing too many nonlinear distortion parameters makes the solution unstable and harms rather than helps precision; formula (4.2) can therefore be simplified to:
δx = k1x(x² + y²)
δy = k1y(x² + y²), (4.3)
From this it is apparent that the distortion increases as the radial radius increases, i.e. the farther from the image center, the more severe the local distortion;
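The simplified radial-only model of formula (4.3) can be illustrated with a single assumed coefficient k1 (the value 0.01 below is purely illustrative); the sketch also demonstrates the stated observation that distortion grows with distance from the image center.

```python
def radial_distort(x, y, k1):
    """Apply the simplified radial-only distortion model: the displacement
    grows with r^2 = x^2 + y^2, so points far from the image center
    (the optical axis at the origin) are displaced more."""
    r2 = x * x + y * y
    return x * (1.0 + k1 * r2), y * (1.0 + k1 * r2)

# Displacement near the center vs. far from it (k1 = 0.01 is illustrative):
near = radial_distort(0.1, 0.0, 0.01)[0] - 0.1   # small shift
far = radial_distort(1.0, 0.0, 0.01)[0] - 1.0    # much larger shift
```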
S2.1.2, Mark point selection;
Through the Mark points, the coordinates of detected objects in the two-dimensional image are transformed into the three-dimensional image; judging whether the converted three-dimensional coordinates match the detected object positions realizes the camera calibration goal;
Image acquisition is carried out with an area-array camera combined with a 3D camera; the Mark points chosen are circles or rectangles with a certain height, uniform color and regular shape;
Since Mark points and detected objects differ markedly in gray level and shape, the region containing a Mark point is first segmented out with a gray-threshold method, and the Mark point is then identified with a feature-extraction method;
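The two-stage segment-then-identify idea can be sketched as below. The threshold value and the bounding-box fill-ratio feature are illustrative assumptions, not the patented feature extraction: a solid circle fills about π/4 ≈ 0.785 of its bounding box while a rectangle fills 1.0, which is enough to separate the two permitted Mark shapes.

```python
import numpy as np

def threshold_mark(img, thresh):
    """Gray-threshold step: Mark points are assumed brighter than the nest."""
    return img > thresh

def fill_ratio(mask):
    """Simple shape feature: region area / bounding-box area. A solid disc
    gives roughly pi/4 (~0.785); a filled rectangle gives 1.0."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return mask.sum() / float(h * w)
```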
S2.1.3, Mark point center-of-mass coordinate calculation;
The Mark points in the two-dimensional and three-dimensional images are identified with the above Mark-point recognition method, and their respective center-of-mass coordinates are then calculated; for a 2D discretized digital image with f(x, y) ≥ 0, the (p+q)-order moment Mpq and central moment μpq are defined as:
Mpq = Σx Σy x^p y^q f(x, y), μpq = Σx Σy (x − ic)^p (y − jc)^q f(x, y), (4.4)
In the above formula, (ic, jc) is the center-of-mass coordinate, with ic = M10/M00 and jc = M01/M00; (4.5)
Therefore, the Mark-point center-of-mass coordinates of the two-dimensional image and the three-dimensional image, (i1, j1) and (i2, j2) respectively, are found from the above formulas;
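The moment definitions and the center-of-mass formula can be sketched directly; `moment` and `centroid` are illustrative names:

```python
import numpy as np

def moment(img, p, q):
    """Raw (p+q)-order moment Mpq = sum_x sum_y x^p * y^q * f(x, y)
    of a 2-D gray image f with f(x, y) >= 0."""
    x, y = np.indices(img.shape)
    return float((x ** p * y ** q * img).sum())

def centroid(img):
    """Center of mass (ic, jc) = (M10 / M00, M01 / M00)."""
    m00 = moment(img, 0, 0)
    return moment(img, 1, 0) / m00, moment(img, 0, 1) / m00
```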
S2.1.4, Mark-point reduction formula for image coordinate conversion;
According to the Mark point coordinates, the coordinates of detected objects in the two-dimensional image are transformed into the three-dimensional image by coordinate conversion; if the converted coordinate position in the three-dimensional image is exactly the detected object position, the calibration between the plane camera and the 3D camera is completed;
S2.2, experimental result;
S2.2.1, Mark point center-of-mass coordinate;
The centers of mass of the Mark points in the two-dimensional image and the three-dimensional image are calculated with the center-of-mass formula;
S2.2.2, bird's nest extrinsic region center-of-mass coordinate;
Two empty arrays R and C are created to store, respectively, the row and column coordinates of the feather region centers of mass; 20 feather impurity regions are detected in the 2D bird's nest image, with region center-of-mass coordinates (R, C);
S2.2.3, bird's nest extrinsic region coordinate;
Two empty arrays Rows and Columns are created to store, respectively, the row and column coordinates of the feather region centers of mass; 20 feather impurity regions are detected in the 2D bird's nest image, with region coordinates (Rows, Columns);
The 3D feather impurity region coordinates can be calculated with the Mark-point reduction formula; two empty arrays Rows3m and Columns3m are created to store, respectively, the row and column coordinates of the feather region centers of mass in the 3D bird's nest image;
S2.2.4, bird's nest impurity 3D model;
Using the obtained feather impurity region 3D coordinates (Row3m, Colm3m, Z), a 3D model figure of the bird's nest impurities is generated;
The figure shows the specific positions and sizes of the impurities, which facilitates picking them out; however, these are only the surface-layer impurities, and feather impurities inside the nest that are not exposed on the surface remain undetected. The surface-sorted bird's nest can therefore be fed back through the combined area-array-camera and 3D-camera acquisition, the images processed with the same method described above, and the internal feather impurity regions sorted out.
2. The bird's nest impurity sorting method fusing 2D and 3D images according to claim 1, characterized in that the straight-line extension algorithm for the semi-major axis of the minimum enclosing ellipse in step S1.4.2 is specifically as follows:
Step 1: find the minimum enclosing ellipse of each second-class feather region, obtaining each region's minimum-enclosing-ellipse semi-major axis a;
Step 2: with the region's outer center of mass (R, C) as the starting point and the point (i1, j1) of minimum distance from the center of mass to the region as the end point, connect the two points to obtain line segment L;
Step 3: with the minimum-distance point (i1, j1) as the starting point, draw the extension of line L with extension length a, obtaining the new straight line M;
Step 4: find the intersection point (m, n) of the new straight line M with the feather region, and calculate the midpoint (p, q) of (m, n) and the minimum-distance point (i1, j1); this point is the required corrected point.
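Steps 1-4 of the claim above can be sketched as follows. One simplification is assumed: the real method intersects the extension line with the feather region to obtain (m, n), whereas here the endpoint of the extension stands in for that intersection, and the function name is illustrative.

```python
import math

def extend_and_correct(centroid, nearest, a):
    """Extend the ray centroid -> nearest boundary point by length a (the
    region's ellipse semi-major axis), then return the midpoint of the
    extended endpoint and the nearest point - the corrected picking point."""
    (r, c), (i1, j1) = centroid, nearest
    d = math.hypot(i1 - r, j1 - c)
    ux, uy = (i1 - r) / d, (j1 - c) / d        # unit direction of segment L
    m, n = i1 + a * ux, j1 + a * uy            # endpoint of extension M
    return ((m + i1) / 2.0, (n + j1) / 2.0)    # midpoint (p, q)
```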
3. The bird's nest impurity sorting method fusing 2D and 3D images according to claim 1, characterized in that the calibration process in step S2.1.4 is specifically as follows:
Step 1: find the 2D image Mark point coordinate (i1, j1) and the 2D image impurity region coordinates (Rows, Columns);
Step 2: find the row and column distances between the Mark point coordinate and the 2D impurity region coordinates in the 2D image:
Row=Rows-i1
Col=Columns-j1, (4.6)
Step 3: calculate the length ratio and the width ratio between the 2D image and the 3D image:
KL = L3D/L2D, KW = W3D/W2D, (4.7)
(L2D is the length of the 2D image and L3D the length of the 3D image; W2D and W3D are the corresponding widths)
Step 4: calculate the 3D impurity region row and column coordinates using the Mark point:
Row3m=KL*i2+Row
Colm3m=KW*j2+Col, (4.8)
Step 5: the obtained Row3m and Colm3m are the two-dimensional coordinates of each feather region in the three-dimensional image;
Step 6: each two-dimensional coordinate point in the three-dimensional image corresponds to a fixed height value Z, so the three-dimensional coordinate (Row3m, Colm3m, Z) of each feather region can be obtained from the 3D camera;
Here a permitted maximum deviation σ = 0.5 mm is set; if the computed three-dimensional coordinate (Row3m, Colm3m, Z) lies within the permitted maximum deviation, the camera calibration is complete; otherwise, distortion correction is applied to the lens again and the calculation restarts from Step 1; since the permitted maximum deviation is specified to a precision of 0.1 mm, the recalculated result is retained to a precision of 0.01 mm to reduce the calculation error.
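The claim's Step 1 through Step 4 can be sketched end to end as below; the scale factors are assumed to be the 3D-to-2D image-size ratios (KL = L3D/L2D, KW = W3D/W2D), and the function name and example numbers are illustrative.

```python
def mark_reduction(mark2d, mark3d, region2d, size2d, size3d):
    """Map a 2D impurity-region coordinate into the 3D image via the Mark
    point. mark2d = (i1, j1) and mark3d = (i2, j2) are the Mark centroids,
    region2d = (Rows, Columns) the 2D region coordinate, size2d / size3d
    the (length, width) of the 2D and 3D images."""
    i1, j1 = mark2d
    i2, j2 = mark3d
    rows, cols = region2d
    row, col = rows - i1, cols - j1        # eq. 4.6: offsets from the Mark point
    kl = size3d[0] / size2d[0]             # assumed form of eq. 4.7
    kw = size3d[1] / size2d[1]
    return kl * i2 + row, kw * j2 + col    # eq. 4.8: (Row3m, Colm3m)
```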
CN201910282067.1A 2019-04-09 2019-04-09 A kind of bird's nest impurity method for sorting merging 2D and 3D rendering Pending CN110176020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282067.1A CN110176020A (en) 2019-04-09 2019-04-09 A kind of bird's nest impurity method for sorting merging 2D and 3D rendering

Publications (1)

Publication Number Publication Date
CN110176020A true CN110176020A (en) 2019-08-27

Family

ID=67689512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282067.1A Pending CN110176020A (en) 2019-04-09 2019-04-09 A kind of bird's nest impurity method for sorting merging 2D and 3D rendering

Country Status (1)

Country Link
CN (1) CN110176020A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184563A (en) * 2011-03-23 2011-09-14 华中科技大学 Three-dimensional scanning method, three-dimensional scanning system and three-dimensional scanning device used for plant organ form
CN103985155A (en) * 2014-05-14 2014-08-13 北京理工大学 Scattered point cloud Delaunay triangulation curved surface reconstruction method based on mapping method
US20160253807A1 (en) * 2015-02-26 2016-09-01 Mitsubishi Electric Research Laboratories, Inc. Method and System for Determining 3D Object Poses and Landmark Points using Surface Patches
TW201643811A (en) * 2015-01-09 2016-12-16 鴻海精密工業股份有限公司 System and method for merging point cloud data
US9578309B2 (en) * 2014-06-17 2017-02-21 Actality, Inc. Adjustable parallax distance, wide field of view, stereoscopic imaging system
CN106651882A (en) * 2016-12-29 2017-05-10 广东工业大学 Method and device for identifying and detecting cubilose impurities based on machine vision
CN107392956A (en) * 2017-06-08 2017-11-24 北京农业信息技术研究中心 Crop root Phenotypic examination method and apparatus
CN108022264A (en) * 2016-11-01 2018-05-11 狒特科技(北京)有限公司 Camera pose determines method and apparatus
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
CN108682033A (en) * 2018-05-29 2018-10-19 石河子大学 A kind of phase safflower filament two-dimensional image center in full bloom point extracting method
CN109544681A (en) * 2018-11-26 2019-03-29 西北农林科技大学 A kind of fruit three-dimensional digital method based on cloud
CN109544456A (en) * 2018-11-26 2019-03-29 湖南科技大学 The panorama environment perception method merged based on two dimensional image and three dimensional point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Baoyun: "Research on constructing 3D building models from 2D photographs", Electronic Technology & Software Engineering *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113252710A (en) * 2021-06-16 2021-08-13 北京艾尚燕食品科技有限公司 Bird's nest component detection method and device
CN113252710B (en) * 2021-06-16 2021-09-24 北京艾尚燕食品科技有限公司 Bird's nest component detection method and device
CN113866196A (en) * 2021-06-16 2021-12-31 北京艾尚燕食品科技有限公司 Bird's nest composition detecting system
CN116721108A (en) * 2023-08-11 2023-09-08 山东奥晶生物科技有限公司 Stevioside product impurity detection method based on machine vision
CN116721108B (en) * 2023-08-11 2023-11-03 山东奥晶生物科技有限公司 Stevioside product impurity detection method based on machine vision
CN116844142A (en) * 2023-08-28 2023-10-03 四川华腾公路试验检测有限责任公司 Bridge foundation scouring identification and assessment method
CN116844142B (en) * 2023-08-28 2023-11-21 四川华腾公路试验检测有限责任公司 Bridge foundation scouring identification and assessment method

Similar Documents

Publication Publication Date Title
CN105261017B (en) The method that image segmentation based on road surface constraint extracts pedestrian's area-of-interest
CN103164692B (en) A kind of special vehicle instrument automatic identification system based on computer vision and method
CN109447945B (en) Quick counting method for basic wheat seedlings based on machine vision and graphic processing
CN107784669A (en) A kind of method that hot spot extraction and its barycenter determine
CN110210477B (en) Digital instrument reading identification method
CN107437068B (en) Pig individual identification method based on Gabor direction histogram and pig body hair mode
CN108186051B (en) Image processing method and system for automatically measuring double-apical-diameter length of fetus from ultrasonic image
CN110176020A (en) A kind of bird's nest impurity method for sorting merging 2D and 3D rendering
CN109559324A (en) A kind of objective contour detection method in linear array images
CN112750121B (en) System and method for detecting digital image quality of pathological slide
CN108564092A (en) Sunflower disease recognition method based on SIFT feature extraction algorithm
CN110348263A (en) A kind of two-dimensional random code image recognition and extracting method based on image recognition
CN109977899B (en) Training, reasoning and new variety adding method and system for article identification
CN111898627B (en) SVM cloud microparticle optimization classification recognition method based on PCA
CN106296670A (en) A kind of Edge detection of infrared image based on Retinex watershed Canny operator
CN106327451A (en) Image restorative method of ancient animal fossils
CN109492645A (en) A kind of registration number character dividing method and device
CN108830857A (en) A kind of adaptive Chinese character rubbings image binaryzation partitioning algorithm
CN109241948A (en) A kind of NC cutting tool visual identity method and device
JP4747122B2 (en) Specific area automatic extraction system, specific area automatic extraction method, and program
CN108961301A (en) It is a kind of based on the unsupervised Chaetoceros image partition method classified pixel-by-pixel
CN110648312A (en) Method for identifying wool and cashmere fibers based on scale morphological characteristic analysis
Radhiyah et al. Comparison study of Gaussian and histogram equalization filter on dental radiograph segmentation for labelling dental radiograph
CN105205485B (en) Large scale image partitioning algorithm based on maximum variance algorithm between multiclass class
US10115195B2 (en) Method and apparatus for processing block to be processed of urine sediment image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190827