CN107239748A - Robot target identification and localization method based on checkerboard calibration technique - Google Patents

Robot target identification and localization method based on checkerboard calibration technique

Info

Publication number
CN107239748A
Authority
CN
China
Legal status
Pending
Application number
CN201710342229.7A
Other languages
Chinese (zh)
Inventor
沈梦娇
梁志伟
姜燕
黄校娟
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
2017-05-16
Filing date
2017-05-16
Publication date
2017-10-10
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201710342229.7A
Publication of CN107239748A

Classifications

    • G06V 20/20 — Scenes; scene-specific elements in augmented reality scenes
    • G06T 7/13 — Image analysis; segmentation; edge detection
    • G06T 7/136 — Image analysis; segmentation; edge detection involving thresholding
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/90 — Determination of colour characteristics
    • G06V 10/443 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; by matching or filtering
    • G06V 10/56 — Extraction of image or video features relating to colour

Abstract

The invention discloses a robot target identification and localization method based on the checkerboard calibration technique. The method comprises two parts, target identification and target localization. In target identification, the robot analyses and processes the images it acquires to extract the effective target. In target localization, the checkerboard calibration technique is applied to precisely locate the effective target on the course. With this method a robot can accurately identify a target in a complex and changing environment and precisely locate it, without being affected by factors such as illumination intensity, noise and the clothing of spectators at the scene, which greatly improves the target identification and localization accuracy of the robot.

Description

Robot target identification and localization method based on checkerboard calibration technique
Technical field
The present invention relates to robot target identification and localization methods, in particular to a robot target identification and localization method based on the checkerboard calibration technique, and belongs to the technical field of object recognition and detection.
Background technology
Research on soccer robots has developed rapidly in recent years, so this multidisciplinary problem attracts more and more attention. As a preview event of the RoboCup Standard Platform League (SPL), the NAO robot golf competition provides a simplified but highly important research platform. In the NAO robot golf event, the real-time environment of the golf course is complex and changeable; illumination intensity, noise, the clothing of spectators at the scene and similar factors all significantly affect the normal operation of the NAO robot. It is therefore extremely important that the robot can accurately identify a target in such a complex and changing environment and precisely locate it.
The content of the invention
The technical problem to be solved by the invention is to provide a robot target identification and localization method based on the checkerboard calibration technique that greatly improves the target identification and localization accuracy of a robot.
The present invention adopts the following technical scheme to solve the above technical problem:
A robot target identification and localization method based on the checkerboard calibration technique comprises the following steps:
Step 1: mount two cameras on the robot head, one above the other, with the two cameras on the same vertical line, and acquire a scene image of the target object with either camera;
Step 2: at the scene where the target object is located, obtain the colour-feature threshold of the target object by sampling RGB values on site, and determine the threshold range of the target object's colour feature by off-line training;
Step 3: segment the scene image of the target object with a threshold-based segmentation algorithm and extract the effective-target feature region image;
Step 4: apply a median filtering algorithm to the effective-target feature region image for preliminary processing;
Step 5: divide the image preliminarily processed in step 4 into several pixel blocks of identical size, scan the pixels in each block from left to right and from top to bottom, and judge whether the RGB value of each pixel lies within the threshold range obtained in step 2; if so, the colour of that pixel is regarded as the target object's colour feature, and all pixel information of the target object is thereby obtained;
Step 6: smooth the image obtained in step 5 with a Gaussian filter to obtain the smoothed image;
Step 7: convert the smoothed image from the RGB colour space to the HSV colour space;
Step 8: perform edge detection on the HSV image with the Canny edge detection algorithm to obtain the identified effective-target image;
Step 9: take the upper-left corner of the effective-target image as the coordinate origin and the two sides that intersect at that corner as the u-axis and v-axis, and establish the pixel coordinate system of the effective-target image;
Step 10: convert the pixel coordinate system of the effective-target image to the world coordinate system, compute the intrinsic and extrinsic parameters of the camera from the transformation relation between the pixel and world coordinate systems, and finally obtain the position information of the target object.
As a preferred embodiment of the present invention, in the Canny edge detection algorithm of step 8 the gradient magnitude and angle of a pixel are computed as:

$$G(x,y)=\sqrt{G_X^2(x,y)+G_Y^2(x,y)},\qquad \theta(x,y)=\tan^{-1}\big(G_Y(x,y)/G_X(x,y)\big),$$

where $G(x,y)$ is the gradient magnitude of pixel $(x,y)$, $\theta(x,y)$ is the angle of pixel $(x,y)$, $G_X(x,y)$ is the gradient of pixel $(x,y)$ in the X direction, and $G_Y(x,y)$ is the gradient of pixel $(x,y)$ in the Y direction.
As a preferred embodiment of the present invention, the detailed process of converting the pixel coordinate system of the effective-target image to the world coordinate system in step 10 is as follows:
1) convert the pixel coordinate system of the effective-target image to the image coordinate system;
2) convert the image coordinate system to the camera coordinate system;
3) convert the camera coordinate system to the world coordinate system.
As a preferred embodiment of the present invention, the conversion formula from the pixel coordinate system to the image coordinate system is:

$$\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}dx&0&-u_0\,dx\\0&dy&-v_0\,dy\\0&0&1\end{bmatrix}\begin{bmatrix}u\\v\\1\end{bmatrix}$$

where $x, y$ are the values of a coordinate point on the x-axis and y-axis of the image coordinate system, $u, v$ are the values of the coordinate point on the u-axis and v-axis of the pixel coordinate system, $u_0, v_0$ are the pixel coordinates of the image-coordinate-system origin on the u-axis and v-axis, and $dx, dy$ are the physical size of one pixel along the x-axis and y-axis.
As a preferred embodiment of the present invention, the conversion formula from the image coordinate system to the camera coordinate system is:

$$z_c\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}\begin{bmatrix}x_c\\y_c\\z_c\end{bmatrix}$$

where $x, y$ are the values of a coordinate point on the x-axis and y-axis of the image coordinate system, $f$ is the distance between the camera-coordinate-system origin and the image-coordinate-system origin, and $x_c, y_c, z_c$ are the values of the coordinate point on the $x_c$-, $y_c$- and $z_c$-axes of the camera coordinate system.
As a preferred embodiment of the present invention, the conversion formula from the camera coordinate system to the world coordinate system is:

$$\begin{bmatrix}x_c\\y_c\\z_c\\1\end{bmatrix}=\begin{bmatrix}R&t\\0&1\end{bmatrix}\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix}$$

where $x_c, y_c, z_c$ are the values of a coordinate point on the $x_c$-, $y_c$- and $z_c$-axes of the camera coordinate system, $R$ is the rotation matrix, $t$ is the translation matrix, and $x_w, y_w, z_w$ are the values of the coordinate point on the $x_w$-, $y_w$- and $z_w$-axes of the world coordinate system.
Compared with the prior art, the above technical scheme of the present invention has the following technical effect:
with the target identification and localization method of the present invention, the robot can accurately identify a target in a complex and changing environment and precisely locate it, without being affected by factors such as illumination intensity, noise and the clothing of spectators at the scene, which greatly improves the target identification and localization accuracy of the robot.
Brief description of the drawings
Fig. 1 is a schematic diagram of the NAO robot cameras, where (a) is a side view and (b) is a rear view.
Fig. 2 is a picture captured by the NAO robot.
Fig. 3 is the RGB colour space model.
Fig. 4 is the HSV colour space model.
Fig. 5 is the image in the HSV colour space.
Fig. 6 is the effective target obtained by the Canny edge detection algorithm.
Fig. 7 is the effective-target image obtained by target identification.
Fig. 8 is the geometric relationship of imaging among the four coordinate systems.
Fig. 9 is the mapping between the image coordinate system and the camera coordinate system.
Fig. 10 (a) and (b) are calibration pictures shot from different angles.
Embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings. The embodiments described with reference to the drawings are exemplary, serve only to explain the present invention, and are not to be construed as limiting the claims.
The present invention mainly introduces a target identification and precise localization method for the golf game, and describes in detail the target identification module and the target localization module of this method, which greatly improve the target identification accuracy of the robot. The localization method proposed by the present invention is completed mainly by the cooperation of the target identification module and the target localization module: the target identification module extracts the effective target, and the target localization module then precisely determines the exact position of the effective target.
In the golf game the NAO robot mainly uses its vision system to recognise the target object while perceiving the course environment. Two cameras arranged vertically, one above the other, on the robot head provide 640*480 YUV422 images, and a frame rate of 30 frames per second ensures real-time imaging, as shown in Fig. 1 (a) and (b). In this localization method, the target identification module extracts the effective target by analysing and processing the images acquired by the robot, and the target localization module then applies the checkerboard calibration technique to precisely locate the effective target on the course.
1. Target identification
Appropriately analysing and processing the pictures the robot acquires and extracting the effective target is the task to be completed by the target identification module. Its detailed work flow is as follows.
1.1 Image preprocessing
The cameras of the NAO robot have certain limitations: during a game the acquired image information contains a certain amount of noise and distortion. These uncertain factors strongly affect the accuracy of effective-target identification. To obtain higher-quality image information, this noise and distortion must be removed and corrected; image preprocessing mainly serves to suppress useless information, enhance useful information and improve image quality. The preprocessing techniques used in the present invention are described below.
1) Determination of the colour threshold
Under different illumination intensities and different imaging distances, the colour of an object shows a certain deviation; obtaining a relatively stable colour threshold is the key to obtaining the effective target. To save the robot's pre-game debugging time, the present invention obtains the colour threshold of the target object by sampling RGB values on site, and then determines a relatively stable threshold range for the target object's colour feature by off-line training. This simplifies the threshold calculation, increases processing speed and greatly improves match performance. A minimal sketch of how such a stable range could be derived from on-site samples is given below.
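The following sketch assumes the on-site samples are collected as an N x 3 array of RGB values of the target object; the percentile bounds and all names are illustrative assumptions, not part of the patent:

```python
import numpy as np

def estimate_colour_threshold(samples, low_pct=2.0, high_pct=98.0):
    """Derive a robust per-channel threshold range from on-site RGB samples.

    samples: (N, 3) array of RGB values sampled from the target object.
    Percentile bounds discard outliers caused by highlights and shadows.
    """
    samples = np.asarray(samples, dtype=np.float32)
    lower = np.percentile(samples, low_pct, axis=0)
    upper = np.percentile(samples, high_pct, axis=0)
    return lower, upper
```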
2) Image segmentation
After the robot has collected the image information, image segmentation technology is used to extract the effective-target feature region. In the vision system of the NAO robot, the feature region is the effective target on the course; the present invention extracts the effective target with a threshold-based segmentation algorithm and carries out the subsequent detailed identification work on this basis.
3) Image noise processing
A large amount of salt-and-pepper noise appears in the image information after segmentation; salt-and-pepper noise makes accurate image processing difficult and directly affects feature extraction and image recognition. The present invention applies a median filtering algorithm to the salt-and-pepper noise produced by segmentation for preliminary processing, as sketched below.
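A minimal sketch of threshold-based segmentation followed by median filtering, using OpenCV; the file name, the threshold bounds and the 5-pixel kernel are illustrative assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")                      # image acquired by the robot (BGR)
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

lower = np.array([15, 120, 120], dtype=np.uint8)   # illustrative lower bound
upper = np.array([30, 255, 255], dtype=np.uint8)   # illustrative upper bound
mask = cv2.inRange(rgb, lower, upper)              # threshold-based segmentation

target = cv2.bitwise_and(img, img, mask=mask)      # effective-target feature region
denoised = cv2.medianBlur(target, 5)               # median filter vs. salt-and-pepper noise
```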
1.2 Identification of the NAO effective target
During pre-game debugging the NAO robot has obtained the colour thresholds of the effective targets on the course. Taking the yellow golf flagpole as an example, the pixels in the image acquired by the robot that fall within the yellow threshold range obtained during preprocessing are determined. The main idea of NAO flagpole recognition is: find a yellow target region on the course and apply a series of image analysis and processing steps; if the highlighted region is a rectangle, and this rectangle has the same proportions as the flagpole in the reference image, the target flagpole is considered identified. The following image analysis and processing is carried out on the 640 x 480 original image acquired by the NAO robot camera (shown in Fig. 2) to identify the effective target.
1) Reading the pixel information of the picture
The image is divided by a grid of 20-pixel squares into 32 x 24 square pixel blocks. In the RGB colour space, the background pixel value is set to (0, 0, 0), and the yellow pixel value range is set to (15, 120, 120) to (30, 255, 255). The acquired picture information is reorganised into a standard two-dimensional counting array count[32][24]; each element of the 32*24 array corresponds to one pixel block and counts its pixels. After image preprocessing, every pixel of every row is scanned; when a pixel satisfies the yellow threshold acquired on the real course, it is regarded as a yellow pixel. When the whole picture has been scanned, all pixel information of the effective target has been obtained, and the NAO robot judges the region to be the yellow flagpole. A sketch of this block-wise scan follows.
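A minimal sketch of the 32 x 24 block-wise yellow-pixel count described above; applying the (15, 120, 120)–(30, 255, 255) range with cv2.inRange and all variable names are illustrative assumptions:

```python
import cv2
import numpy as np

BLOCK = 20                                     # 20-pixel grid -> 32 x 24 blocks on 640 x 480
img = cv2.imread("frame.png")                  # 640 x 480 original image (BGR)
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
mask = cv2.inRange(rgb, np.array([15, 120, 120], np.uint8),
                        np.array([30, 255, 255], np.uint8))

count = np.zeros((32, 24), dtype=np.int32)     # the count[32][24] array from the text
for bx in range(32):                           # scan blocks left to right ...
    for by in range(24):                       # ... and top to bottom
        block = mask[by * BLOCK:(by + 1) * BLOCK, bx * BLOCK:(bx + 1) * BLOCK]
        count[bx, by] = cv2.countNonZero(block)   # yellow pixels in this block
```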
2) Gaussian filtering to smooth the image
Gaussian filtering is a linear smoothing filter widely used for noise reduction in image processing. Simply put, Gaussian filtering performs a weighted averaging over the whole image; it is very effective against noise that follows a normal distribution. Concretely, each pixel of the image is scanned with a convolution kernel, and the weighted average grey value of the pixels in the neighbourhood defined by the kernel replaces the value of the pixel at the kernel centre.
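In OpenCV this weighted-average convolution is a single call; continuing the earlier sketch, the 5 x 5 kernel size and sigma of 1.0 are illustrative assumptions:

```python
import cv2

# Each output pixel becomes the Gaussian-weighted average of its 5 x 5
# neighbourhood, suppressing normally distributed noise.
smoothed = cv2.GaussianBlur(denoised, (5, 5), 1.0)
```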
3) Colour space conversion
The RGB colour space and the HSV colour space are both widely used colour systems in current image processing. RGB is the abbreviation of the three primary colours red, green and blue; by superimposing the three primaries to different degrees, all manner of colours are produced, covering all colours human vision can perceive. The RGB colour space model is shown in Fig. 3. The model assigns a grey value of 0-255 to each of the R, G and B components of every pixel, so a full-colour RGB image has 16,581,375 colours. A pixel colour value in the RGB colour space is expressed as (Red, Green, Blue), where white is (255, 255, 255), black is (0, 0, 0) and yellow is (255, 255, 0).
The HSV (Hue, Saturation, Value) colour space is a colour space created according to the intuitive properties of colour; its model is shown in Fig. 4. The H parameter represents the colour information, i.e. the position in the spectrum, and is expressed as an angle: red, green and blue are 120 degrees apart, and complementary colours differ by 180 degrees. The saturation S is a ratio ranging from 0 to 1, expressed as the ratio between the purity of the selected colour and its maximum purity; at S = 0 only grey remains. V represents the brightness of the colour and ranges from 0 to 1. At the cone apex, which corresponds to V = 0, the values of H and S are meaningless and the point represents black; at the centre of the top face, where S = 0 and V = 1, the value of H is meaningless and the point represents white.
On a real course, illumination intensity is an unavoidable interference factor, and the RGB colour space is strongly affected by it: its ability to distinguish objects becomes inadequate, and it may even render the robot "blind". Extensive experimental results show that for an object of one and the same colour, under different illumination intensities or different light sources, its RGB values are distributed very discretely. This makes an RGB colour threshold hard to determine: noise is easily included, or the object to be recognised is lost, which during a match ultimately leads to target loss caused by uneven illumination on the course. The HSV colour space, by contrast, does not change with illumination intensity, which to a certain extent reduces the influence of illumination conditions on robot vision and enhances the adaptability of the robot's vision system. The present invention performs colour recognition and processing in the HSV colour space; Fig. 5 shows the image after conversion to the HSV colour space.
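A minimal sketch of the RGB-to-HSV conversion with OpenCV (note that OpenCV stores images in BGR order and maps hue to 0-179):

```python
import cv2

img = cv2.imread("frame.png")                  # OpenCV loads images as BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)     # H in 0-179, S and V in 0-255
h, s, v = cv2.split(hsv)                       # separate hue, saturation, value planes
```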
4) Canny edge detection
The Canny edge detection algorithm is an edge detection algorithm based on image gradients developed by John F. Canny in 1986, and is one of the classic methods of image edge detection. The classic Canny algorithm generally starts with Gaussian smoothing and ends with edge linking based on dual thresholds. Gaussian smoothing mainly reduces image noise, which helps to compute the gradient and edge magnitude of the image more accurately. The present invention uses the 2*2 first-difference operator, whose mathematical expressions are:

$$G_x(x,y)\approx\big[S(x,y+1)-S(x,y)+S(x+1,y+1)-S(x+1,y)\big]/2 \qquad (1)$$
$$G_y(x,y)\approx\big[S(x,y)-S(x+1,y)+S(x,y+1)-S(x+1,y+1)\big]/2 \qquad (2)$$

where $G_x(x,y)$ is the gradient in the X direction and $G_y(x,y)$ is the gradient in the Y direction. From the gradients in the X and Y directions, the gradient magnitude and angle of the image at the pixel can be computed:

$$G(x,y)=\sqrt{G_x^2(x,y)+G_y^2(x,y)} \qquad (3)$$
$$\theta(x,y)=\tan^{-1}\big(G_y(x,y)/G_x(x,y)\big) \qquad (4)$$

where $G(x,y)$ is the gradient magnitude, i.e. the grey-level strength, and $\theta(x,y)$ is the angle at that point. The value range of the arctangent function is $(-90^{\circ}, 90^{\circ})$; for ease of calculation, $90^{\circ}$ is added to the angle value so that the angle range lies between $0^{\circ}$ and $180^{\circ}$.
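A minimal sketch of equations (1)-(4) in NumPy; the epsilon guarding against division by zero is an illustrative assumption:

```python
import numpy as np

def gradient_magnitude_angle(s):
    """Compute G(x, y) and theta(x, y) of a float grey image s via the
    2x2 first differences of equations (1) and (2)."""
    gx = (s[:-1, 1:] - s[:-1, :-1] + s[1:, 1:] - s[1:, :-1]) / 2.0   # eq. (1)
    gy = (s[:-1, :-1] - s[1:, :-1] + s[:-1, 1:] - s[1:, 1:]) / 2.0   # eq. (2)
    g = np.sqrt(gx ** 2 + gy ** 2)                                   # eq. (3)
    theta = np.degrees(np.arctan(gy / (gx + 1e-12))) + 90.0          # eq. (4), 0..180 deg
    return g, theta
```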
After the edge magnitude and angle of every pixel of the image have been obtained, non-maximum suppression is carried out. Its main purpose is to thin the edges; this step further reduces the number of edge pixels. After non-maximum suppression a small number of non-edge pixels still remain in the result, so they must be accepted or rejected by thresholding. Canny proposed the dual-threshold (hysteresis threshold) method, which achieves good edge selection; in practical applications the dual threshold also has the effect of linking edges. Dual-threshold selection and edge linking assume two thresholds, of which one is a high threshold TH and the other a low threshold TL; then:
a. any edge pixel below TL is discarded;
b. any edge pixel above TH is kept;
c. any edge pixel with a value between TL and TH is kept only if it can be connected through edge pixels to a pixel above TH while all pixels along the path exceed the minimum threshold TL; otherwise it is discarded.
The result is displayed in Fig. 6. The finally identified effective-target image is shown in Fig. 7.
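cv2.Canny implements this chain of gradients, non-maximum suppression and dual-threshold hysteresis; the kernel size and the TL/TH values below are illustrative assumptions:

```python
import cv2

grey = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(grey, (5, 5), 1.0)   # Gaussian smoothing first
edges = cv2.Canny(blurred, 50, 150)             # TL = 50, TH = 150 (hysteresis)
```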
2. Target localization
After the target object has been accurately identified, the NAO robot precisely locates it with the checkerboard calibration method, which greatly improves the robot's accuracy. Since the classic papers of Tsai and Zhang appeared one after another, camera calibration has been considered a mature technique for recovering objects in space from images shot by a camera. On this basis, the NAO robot precisely determines the position data of the target object from the checkerboard images shot by its head camera. The checkerboard calibration algorithm is described as follows (a sketch appears after the list):
1) print a template and fix it to the ground; 2) shoot several photos of the template from different angles; 3) detect the feature points in the images; 4) obtain the intrinsic and extrinsic parameters of the camera.
Through this process, 4 high-precision intrinsic parameters and 6 extrinsic parameters are obtained; with this information the three-dimensional information is finally recovered, achieving the goal of precise localization.
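A minimal sketch of these four steps with OpenCV's checkerboard functions; the 9 x 6 inner-corner pattern, the 25 mm square size and the file pattern are illustrative assumptions:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)                 # inner corners per checkerboard row and column (assumed)
SQUARE = 25.0                    # square edge length in millimetres (assumed)

# World coordinates of the corners on the z_w = 0 plane of the template.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_*.png"):          # photos shot from different angles
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = grey.shape[::-1]
    found, corners = cv2.findChessboardCorners(grey, PATTERN)
    if found:                                  # step 3: detect the feature points
        corners = cv2.cornerSubPix(
            grey, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Step 4: A is the intrinsic matrix; rvecs/tvecs hold the extrinsics per view.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```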
2.1 The geometric model of camera imaging
Shooting an image with a camera is actually an optical imaging process. This process can be divided into 3 steps involving 4 coordinate systems; the 3 steps connect the pixel coordinates of the shot picture with the actual spatial position coordinates. The four coordinate systems are:
1) Pixel coordinate system: the origin lies at the upper-left corner of the captured image, and the u-axis and v-axis are parallel to the two perpendicular edges of the image plane. Coordinate values are expressed as (u, v) and are discrete integer values.
2) Image coordinate system: the origin lies at the centre of the captured image, and the x-axis and y-axis are parallel to the u-axis and v-axis of the pixel coordinate system. Coordinate values are expressed as (x, y).
3) Camera coordinate system (optical-centre coordinate system): the origin is the optical centre of the camera, the $x_c$-axis and $y_c$-axis are parallel to the x-axis and y-axis of the image coordinate system, and the optical axis of the camera is the $z_c$-axis. Coordinate values are expressed as $(x_c, y_c, z_c)$.
4) World coordinate system: the coordinate system chosen by the robot according to the natural environment. Coordinate values are expressed as $(x_w, y_w, z_w)$.
The 3 steps are:
1) convert the information in the pixel coordinate system to the image coordinate system;
2) convert the image coordinate system to the camera coordinate system;
3) convert the camera coordinate system to the world coordinate system.
Camera calibration first requires choosing the geometric model of imaging so as to determine the intrinsic and extrinsic parameters and finally obtain the coordinate information of the target position. Describing the imaging geometry model used in this patent requires the above 4 coordinate systems; the geometric relationship of imaging is shown in Fig. 8.
The transformation relations among the 4 coordinate systems are:
1) Pixel coordinate system (u, v) to image coordinate system (x, y)
Choose the centre of the image plane as $O_1$; the coordinate value of $O_1$ in the pixel coordinate system is $(u_0, v_0)$. From the coordinate transformation, a point with pixel coordinates $(u, v)$ has image coordinates $((u-u_0)dx, (v-v_0)dy)$, where $dx$ and $dy$ are the physical size of each pixel along the x-axis and y-axis respectively. The homogeneous transformation between the pixel plane and the image plane is therefore:

$$\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}dx&0&-u_0\,dx\\0&dy&-v_0\,dy\\0&0&1\end{bmatrix}\begin{bmatrix}u\\v\\1\end{bmatrix} \qquad (1)$$
2) Image coordinate system (x, y) to camera coordinate system $(x_c, y_c, z_c)$
From the properties of the camera, the segment $OO_1$ between the camera-coordinate-system origin $O$ and the image-coordinate-system origin $O_1$ is the focal length $f$ of the camera, and the intrinsic matrix of the camera is

$$A=\begin{bmatrix}f/dx&0&u_0\\0&f/dy&v_0\\0&0&1\end{bmatrix}.$$

A point $P_c(x_c, y_c, z_c)$ projects onto the point $P$ in the image coordinate system; by similar triangles, $x=f\,x_c/z_c$ and $y=f\,y_c/z_c$. The mapping between the image coordinate system and the camera coordinate system is shown in Fig. 9, so the transformation relation is:

$$z_c\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}\begin{bmatrix}x_c\\y_c\\z_c\end{bmatrix} \qquad (2)$$
3) Camera coordinate system $(x_c, y_c, z_c)$ to world coordinate system $(x_w, y_w, z_w)$
The conversion between them involves two kinds of transformation: translation and rotation. The specific relation is as follows, where $R=[r_1, r_2, r_3]$ is a 3 x 3 rotation matrix and $t$ is a three-dimensional translation column vector:

$$\begin{bmatrix}x_c\\y_c\\z_c\\1\end{bmatrix}=\begin{bmatrix}R&t\\0&1\end{bmatrix}\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix} \qquad (3)$$
Combining (1), (2) and (3) gives the transformation relation between the pixel coordinate system and the world coordinate system:

$$z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=A\,[R\ \ t]\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix}$$

Since the calibration object is planar, constructing the world coordinate system in the plane $z_w=0$ immediately gives:

$$z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=A\,[r_1\ \ r_2\ \ t]\begin{bmatrix}x_w\\y_w\\1\end{bmatrix}$$

This transformation is a homography, i.e. a mapping from one plane to another plane; in computer vision the matrix $H=[h_1\ h_2\ h_3]=A\,[r_1\ r_2\ t]$ is called the homography matrix.
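A minimal sketch of inverting this homography to map a pixel back to the $z_w=0$ plane; the function name and the use of NumPy are illustrative assumptions:

```python
import numpy as np

def pixel_to_world(u, v, A, R, t):
    """Map pixel (u, v) to world coordinates on the z_w = 0 plane,
    given the intrinsic matrix A (3x3), rotation R (3x3) and translation t."""
    H = A @ np.column_stack((R[:, 0], R[:, 1], np.ravel(t)))  # H = A [r1 r2 t]
    xw, yw, w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return xw / w, yw / w
```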
2.2 The calibration algorithm for the intrinsic and extrinsic parameters
$(x_w, y_w)$ are the coordinates of the calibration object, which are known quantities under the designer's manual control; $(u, v)$ are pixel coordinates, which can be obtained directly from the camera. For each correspondence $(x_w, y_w)\rightarrow(u, v)$, the equation $[h_1\ h_2\ h_3]=A\,[r_1\ r_2\ t]$ combined with the properties of the rotation matrix yields two constraints. Detecting 4 feature points therefore yields 8 equations.
The intrinsic matrix A contains four unknowns ($f/dx$, $f/dy$, $u_0$, $v_0$); the rotation matrix R of the extrinsic parameters contains three unknowns, and the translation vector t contains another three. By changing the relative position between the camera and the calibration object, two photos are obtained; the intrinsic matrix remains fixed while the extrinsic matrix changes with position, which produces 16 unknowns. Likewise, the two photos yield 8 feature points and 16 equations; since 6*2+4 = 8*2, all unknowns can be solved, the calibration of the intrinsic and extrinsic parameters is completed, and the position information of the target object is finally obtained.
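Tying the sketches together: the outputs of the earlier cv2.calibrateCamera sketch can feed pixel_to_world directly (cv2.Rodrigues converts a rotation vector into the matrix R); all names continue the illustrative assumptions above:

```python
import cv2

R, _ = cv2.Rodrigues(rvecs[0])                       # rotation matrix of the first view
xw, yw = pixel_to_world(320, 240, A, R, tvecs[0])    # image centre on the z_w = 0 plane
print(f"target position on the course: ({xw:.1f}, {yw:.1f}) mm")
```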
Fig. 10 (a) and (b) are pictures shot from different angles when applying the camera calibration technique of the present invention to complete the precise localization. The results of the global calibration parameter solution are listed in Table 1 and Table 2.
Table 1 Intrinsic parameters of the global calibration
Table 2 (a) Extrinsic parameters of the global calibration
Table 2 (b) Extrinsic parameters of the global calibration
The above embodiments merely illustrate the technical idea of the present invention and cannot limit the scope of protection of the present invention; any change made on the basis of the technical scheme in accordance with the technical idea proposed by the present invention falls within the scope of protection of the present invention.

Claims (6)

1. A robot target identification and localization method based on the checkerboard calibration technique, characterised by comprising the following steps:
Step 1: mount two cameras on the robot head, one above the other, with the two cameras on the same vertical line, and acquire a scene image of the target object with either camera;
Step 2: at the scene where the target object is located, obtain the colour-feature threshold of the target object by sampling RGB values on site, and determine the threshold range of the target object's colour feature by off-line training;
Step 3: segment the scene image of the target object with a threshold-based segmentation algorithm and extract the effective-target feature region image;
Step 4: apply a median filtering algorithm to the effective-target feature region image for preliminary processing;
Step 5: divide the image preliminarily processed in step 4 into several pixel blocks of identical size, scan the pixels in each block from left to right and from top to bottom, and judge whether the RGB value of each pixel lies within the threshold range obtained in step 2; if so, the colour of that pixel is regarded as the target object's colour feature, and all pixel information of the target object is thereby obtained;
Step 6: smooth the image obtained in step 5 with a Gaussian filter to obtain the smoothed image;
Step 7: convert the smoothed image from the RGB colour space to the HSV colour space;
Step 8: perform edge detection on the HSV image with the Canny edge detection algorithm to obtain the identified effective-target image;
Step 9: take the upper-left corner of the effective-target image as the coordinate origin and the two sides that intersect at that corner as the u-axis and v-axis, and establish the pixel coordinate system of the effective-target image;
Step 10: convert the pixel coordinate system of the effective-target image to the world coordinate system, compute the intrinsic and extrinsic parameters of the camera from the transformation relation between the pixel and world coordinate systems, and finally obtain the position information of the target object.
2. The robot target identification and localization method based on the checkerboard calibration technique according to claim 1, characterised in that in the Canny edge detection algorithm of step 8, the gradient magnitude and angle of a pixel are computed as:

$$G(x,y)=\sqrt{G_X^2(x,y)+G_Y^2(x,y)},\qquad \theta(x,y)=\tan^{-1}\big(G_Y(x,y)/G_X(x,y)\big),$$

where $G(x,y)$ is the gradient magnitude of pixel $(x,y)$, $\theta(x,y)$ is the angle of pixel $(x,y)$, $G_X(x,y)$ is the gradient of pixel $(x,y)$ in the X direction, and $G_Y(x,y)$ is the gradient of pixel $(x,y)$ in the Y direction.
3. The robot target identification and localization method based on the checkerboard calibration technique according to claim 1, characterised in that the detailed process of converting the pixel coordinate system of the effective-target image to the world coordinate system in step 10 is as follows:
1) convert the pixel coordinate system of the effective-target image to the image coordinate system;
2) convert the image coordinate system to the camera coordinate system;
3) convert the camera coordinate system to the world coordinate system.
4. The robot target identification and localization method based on the checkerboard calibration technique according to claim 3, characterised in that the conversion formula from the pixel coordinate system to the image coordinate system is:

$$\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}dx&0&-u_0\,dx\\0&dy&-v_0\,dy\\0&0&1\end{bmatrix}\begin{bmatrix}u\\v\\1\end{bmatrix}$$

where $x, y$ are the values of a coordinate point on the x-axis and y-axis of the image coordinate system, $u, v$ are the values of the coordinate point on the u-axis and v-axis of the pixel coordinate system, and $u_0, v_0$ are the pixel coordinates of the image-coordinate-system origin on the u-axis and v-axis.
5. The robot target identification and localization method based on the checkerboard calibration technique according to claim 3, characterised in that the conversion formula from the image coordinate system to the camera coordinate system is:

$$z_c\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}\begin{bmatrix}x_c\\y_c\\z_c\end{bmatrix}$$

where $x, y$ are the values of a coordinate point on the x-axis and y-axis of the image coordinate system, $f$ is the distance between the camera-coordinate-system origin and the image-coordinate-system origin, and $x_c, y_c, z_c$ are the values of the coordinate point on the $x_c$-, $y_c$- and $z_c$-axes of the camera coordinate system.
6. The robot target identification and localization method based on the checkerboard calibration technique according to claim 3, characterised in that the conversion formula from the camera coordinate system to the world coordinate system is:

$$\begin{bmatrix}x_c\\y_c\\z_c\\1\end{bmatrix}=\begin{bmatrix}R&t\\0&1\end{bmatrix}\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix}$$

where $x_c, y_c, z_c$ are the values of a coordinate point on the $x_c$-, $y_c$- and $z_c$-axes of the camera coordinate system, $R$ is the rotation matrix, $t$ is the translation matrix, and $x_w, y_w, z_w$ are the values of the coordinate point on the $x_w$-, $y_w$- and $z_w$-axes of the world coordinate system.
CN201710342229.7A 2017-05-16 2017-05-16 Robot target identification and localization method based on checkerboard calibration technique Pending CN107239748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710342229.7A CN107239748A (en) 2017-05-16 2017-05-16 Robot target identification and localization method based on checkerboard calibration technique

Publications (1)

Publication Number Publication Date
CN107239748A true CN107239748A (en) 2017-10-10

Family

ID=59985138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710342229.7A Pending CN107239748A (en) Robot target identification and localization method based on checkerboard calibration technique

Country Status (1)

Country Link
CN (1) CN107239748A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175261A (en) * 2011-01-10 2011-09-07 深圳大学 Visual measuring system based on self-adapting targets and calibrating method thereof
US20160035079A1 (en) * 2011-07-08 2016-02-04 Restoration Robotics, Inc. Calibration and Transformation of a Camera System's Coordinate System
CN106127737A (en) * 2016-06-15 2016-11-16 王向东 A kind of flat board calibration system in sports tournament is measured
CN106251337A (en) * 2016-07-21 2016-12-21 中国人民解放军空军工程大学 A kind of drogue space-location method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Lin et al., "Pedestrian detection combining contour features and neural networks", Opto-Electronic Engineering *
LIU He, "Digital Image Processing and Applications", Beijing: China Electric Power Press, 31 December 2005 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230401A (en) * 2018-01-12 2018-06-29 上海鼎盛汽车检测设备有限公司 3D four-wheel position finder automatic camera calibration method and system
CN108563220A (en) * 2018-01-29 2018-09-21 南京邮电大学 The motion planning of apery Soccer robot
CN108596942A (en) * 2018-03-21 2018-09-28 黄启萌 A kind of system and method precisely judging ball drop point using single camera
CN108628808A (en) * 2018-04-04 2018-10-09 华南农业大学 The coordinate transformation method of camera sampled point
CN112106110A (en) * 2018-04-27 2020-12-18 上海趋视信息科技有限公司 System and method for calibrating camera
US11468598B2 (en) 2018-04-27 2022-10-11 Shanghai Truthvision Information Technology Co., Ltd. System and method for camera calibration
WO2019206247A1 (en) * 2018-04-27 2019-10-31 Shanghai Truthvision Information Technology Co., Ltd System and method for camera calibration
CN109087341A (en) * 2018-06-07 2018-12-25 华南农业大学 A kind of fusion method of short distance EO-1 hyperion camera and distance measuring sensor
CN109087341B (en) * 2018-06-07 2022-07-05 华南农业大学 Fusion method of close-range hyperspectral camera and ranging sensor
CN109472829A (en) * 2018-09-04 2019-03-15 顺丰科技有限公司 A kind of object positioning method, device, equipment and storage medium
CN109472829B (en) * 2018-09-04 2022-10-21 顺丰科技有限公司 Object positioning method, device, equipment and storage medium
CN109397294A (en) * 2018-12-05 2019-03-01 南京邮电大学 A kind of robot cooperated localization method based on BA-ABC converged communication algorithm
CN109895238A (en) * 2018-12-20 2019-06-18 中铁十四局集团房桥有限公司 The automatic installation method and system of III type track plate pre-buried sleeve of CRTS-
CN110186459B (en) * 2019-05-27 2021-06-29 深圳市海柔创新科技有限公司 Navigation method, mobile carrier and navigation system
CN110186459A (en) * 2019-05-27 2019-08-30 深圳市海柔创新科技有限公司 Air navigation aid, mobile vehicle and navigation system
CN110378970B (en) * 2019-07-08 2023-03-10 武汉理工大学 Monocular vision deviation detection method and device for AGV
CN110378970A (en) * 2019-07-08 2019-10-25 武汉理工大学 A kind of monocular vision deviation detecting method and device for AGV
CN110666811A (en) * 2019-09-26 2020-01-10 同济大学 RoboCup standard platform group-based ball position prediction method
CN110666811B (en) * 2019-09-26 2022-08-05 同济大学 RoboCup standard platform group-based ball position prediction method
CN111652069A (en) * 2020-05-06 2020-09-11 天津博诺智创机器人技术有限公司 Target identification and positioning method of mobile robot
CN111652069B (en) * 2020-05-06 2024-02-09 天津博诺智创机器人技术有限公司 Target identification and positioning method for mobile robot
CN111798524A (en) * 2020-07-14 2020-10-20 华侨大学 Calibration system and method based on inverted low-resolution camera
CN111798524B (en) * 2020-07-14 2023-07-21 华侨大学 Calibration system and method based on inverted low-resolution camera
CN112033408B (en) * 2020-08-27 2022-09-30 河海大学 Paper-pasted object space positioning system and positioning method
CN112033408A (en) * 2020-08-27 2020-12-04 河海大学 Paper-pasted object space positioning system and positioning method
CN111932627B (en) * 2020-09-15 2021-01-05 蘑菇车联信息科技有限公司 Marker drawing method and system
CN111932627A (en) * 2020-09-15 2020-11-13 蘑菇车联信息科技有限公司 Marker drawing method and system
CN112184807A (en) * 2020-09-22 2021-01-05 深圳市衡泰信科技有限公司 Floor type detection method and system for golf balls and storage medium
WO2022062153A1 (en) * 2020-09-22 2022-03-31 深圳市衡泰信科技有限公司 Golf ball floor type detection method, system, and storage medium
CN112184807B (en) * 2020-09-22 2023-10-03 深圳市衡泰信科技有限公司 Golf ball floor type detection method, system and storage medium

Similar Documents

Publication Publication Date Title
CN107239748A (en) Robot target identification and localization method based on checkerboard calibration technique
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
CN107203973B (en) Sub-pixel positioning method for center line laser of three-dimensional laser scanning system
CN104091324B (en) Quick checkerboard image feature matching algorithm based on connected domain segmentation
CN110232389B (en) Stereoscopic vision navigation method based on invariance of green crop feature extraction
CN104835164B (en) A kind of processing method and processing device of binocular camera depth image
CN107907048A (en) A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning
CN106123772B (en) A kind of nuclear fuel rod pose automatic identification equipment and method
CN109961485A (en) A method of target positioning is carried out based on monocular vision
CN108731587A (en) A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model
CN110033407B (en) Shield tunnel surface image calibration method, splicing method and splicing system
CN109523551B (en) Method and system for acquiring walking posture of robot
CN104760812B (en) Product real-time positioning system and method on conveyer belt based on monocular vision
CN109685913A (en) Augmented reality implementation method based on computer vision positioning
CN109767473A (en) A kind of panorama parking apparatus scaling method and device
CN111739031A (en) Crop canopy segmentation method based on depth information
CN109961399A (en) Optimal stitching line method for searching based on Image distance transform
CN107977996A (en) Space target positioning method based on target calibrating and positioning model
CN108074265A (en) A kind of tennis alignment system, the method and device of view-based access control model identification
CN109308702A (en) A kind of real-time recognition positioning method of target
CN108171753A (en) Stereoscopic vision localization method based on centroid feature point Yu neighborhood gray scale cross correlation
CN110648362A (en) Binocular stereo vision badminton positioning identification and posture calculation method
CN110210292A (en) A kind of target identification method based on deep learning
CN114820817A (en) Calibration method and three-dimensional reconstruction method based on high-precision line laser 3D camera
CN110909571B (en) High-precision face recognition space positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171010