CN108550169A - Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces - Google Patents

Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces Download PDF

Info

Publication number
CN108550169A
CN108550169A CN201810374622.9A
Authority
CN
China
Prior art keywords
coordinate system
chess
under
image
chess piece
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810374622.9A
Other languages
Chinese (zh)
Other versions
CN108550169B (en)
Inventor
韩燮
孙福盛
赵融
郭晓霞
贾彩琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN201810374622.9A priority Critical patent/CN108550169B/en
Publication of CN108550169A publication Critical patent/CN108550169A/en
Application granted granted Critical
Publication of CN108550169B publication Critical patent/CN108550169B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for determining the positions of chess pieces in three-dimensional space and calculating their heights, and solves the problems in chess robots of inaccurate piece localization and the high complexity of calculating piece height with a monocular camera. The position-determination method computes the spatial position of each piece and its distance from the camera: the image is first preprocessed and segmented according to the color of the characters on the pieces; operations are applied to the segmented images to determine the pixel coordinates of each piece center; the spatial position of the piece is then computed from the calibration parameters of the color camera; finally, spatial points in the color-camera coordinate system are matched with pixels in the depth-image coordinate system, the depth of the specified pixel in the color image is extracted, and the actual distance from the piece to the camera is calculated. The height-calculation method calibrates the color camera twice and, using the relationship of the same vector expressed in the camera coordinate system and the world coordinate system, obtains the piece height from the change in camera height.

Description

Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces
Technical field
The present invention relates to chess robots and belongs to the fields of machine vision and image processing. It specifically proposes a method for determining the positions of chess pieces in three-dimensional space and calculating their heights. The method can also be applied to calculating the height of a single regular object in space.
Background art
With the rapid development of robot technology and its wide application in many industries, machine vision plays an increasingly important role in robotics. In a chess robot, determining the positions of the chess pieces in space and calculating their heights is a vital part of the system.
Regarding the determination of piece positions in space, in existing piece-image processing methods external environmental factors (illumination, interference from other objects, etc.) strongly affect piece extraction. After the pieces are extracted, the image therefore has to be preprocessed many times before the noise points can be removed; the processing steps are repetitive and numerous, uncertainty increases, and the image-processing workload and time also increase.
Regarding the calculation of piece height, most existing approaches measure the object with binocular vision or by extracting a three-dimensional point cloud; these methods suffer from high algorithmic complexity and the high price of the required cameras.
Summary of the invention
In order to solve the technical problems in the prior art, the technical solution adopted by the present invention is divided into the following two parts.
Part 1: a method for determining the positions of chess pieces in three-dimensional space, comprising the following steps:
Step 1: acquire an image of the chess pieces whose spatial positions are to be determined, first preprocess the acquired image, then, according to the different colors of the characters on the pieces and the meaning of each component of the HSV color model, determine the H, S and V ranges of the red and green characters on the pieces, segment the red and green pieces separately, and obtain two binary images;
Step 2: linearly fuse the two binary images obtained from the red and green segmentations; after fusion the noise points produced by each extraction cover one another, yielding a single noise-free binary image that contains both the red and the green pieces;
Because red is more vivid, the extraction of the red pieces is only slightly affected by environmental interference and adapts to most changes of environment. Green is darker, and the main noise in its extraction comes from pixels belonging to the red pieces; if the light dims, the noise points at the red-piece positions keep increasing. Preprocessing the green-piece extraction in that situation becomes cumbersome and does not reach the expected effect. Therefore the two extracted images are fused directly: the noise points present in the green extraction are covered by the red-piece regions of the red extraction, which reduces the subsequent image preprocessing and achieves the expected extraction result.
Step 3: apply a dilation operation to the fused binary image to connect adjacent elements, so that the region of each extracted piece becomes a single connected component that does not intersect the others;
Step 4: extract the contours of the pieces in the dilated image and draw the circumscribed circle of each contour; the circle center gives the position of each of the 32 pieces in the image coordinate system;
Step 5: place a planar calibration board in the plane of the 32 piece upper surfaces, compute the intrinsic and extrinsic parameters (Hrgb, Rrgb, Trgb) of the Kinect color camera with the OpenCV calibration function calibrateCamera, and then, using the conversion between the image coordinate system and the world coordinate system given by the pinhole camera model, compute the position of each piece in the world coordinate system, i.e. its position in space;
Step 6: place the planar calibration board in the plane of the 32 piece upper surfaces and compute the intrinsic and extrinsic parameters (Hd, Rd, Td) of the Kinect depth camera with the OpenCV calibration function calibrateCamera;
Because the infrared camera of the Kinect is at the same position as the depth camera, the calibration images can only be acquired with the infrared camera, so calibrating the infrared camera amounts to calibrating the depth camera. Not every image acquired by the infrared camera allows corner extraction; corner detection can fail because of the material of the object, so more images need to be acquired to make the infrared-camera calibration more accurate. It is suggested that 20-25 images be acquired;
Step 7: from the conversion between a point Wp in the world coordinate system and the corresponding point Kp in the color-camera coordinate system, Kp = Rrgb * Wp + Trgb, and the conversion between Wp and the corresponding point Kdp in the depth-camera coordinate system, Kdp = Rd * Wp + Td, compute the relationship between the Kinect color-camera coordinate system and the depth-camera coordinate system; then, using the calibrated intrinsics of the depth camera, transform spatial points in the depth-camera coordinate system into the depth-pixel coordinate system, thereby matching spatial points in the color-camera coordinate system with depth pixels in the depth-image coordinate system. With this matching, transform the piece positions from the color-camera coordinate system into the depth-image coordinate system and compute the piece coordinates in the depth coordinate system;
Step 8: according to the way depth data are stored in the depth map acquired by the Kinect, extract the depth information at the piece coordinates obtained in the depth-pixel coordinate system, obtaining the vertical distance d from the center of the piece upper surface in space to the sensor plane;
Step 9: project the piece coordinates obtained in step 7 in the depth-camera coordinate system onto the XOY plane of that coordinate system and compute the distance d1 from the projected point to the coordinate origin; combining it with the distance d obtained in step 8 and using the Pythagorean theorem D^2 = d1^2 + d^2, compute the actual distance D from the center of the piece upper surface to the sensor, which completes the determination of the piece positions in three-dimensional space.
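As an illustration of step 9, the sketch below (not part of the patent) computes D from a piece point expressed in depth-camera coordinates; the names Kdp and d are assumed to hold the matched point from step 7 and the depth from step 8, and the numeric values are placeholders.

```python
import numpy as np

# Illustrative sketch of step 9 (assumed variable names, not from the patent):
# Kdp is the piece center in depth-camera coordinates, d the vertical depth
# from step 8. Project Kdp onto the XOY plane, take the distance d1 of the
# projection to the origin, then D^2 = d1^2 + d^2.
def piece_distance(Kdp, d):
    X, Y, _ = np.asarray(Kdp, dtype=float).ravel()
    d1 = np.hypot(X, Y)            # projection onto the XOY plane -> distance to origin
    return np.sqrt(d1 ** 2 + d ** 2)

D = piece_distance([120.0, -45.0, 850.0], 850.0)   # example values, assumed
```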
In step 1, the image of the pieces whose positions are to be determined is acquired and preprocessed, and the H, S and V ranges of the red and green characters on the pieces are determined from the different character colors and the meaning of each component of the HSV color model; the sub-steps are:
2.1, use a perspective transform to spatially correct the acquired piece image, rectifying it into an orthographic-projection-like view;
2.2, set an ROI on the image after the perspective transform;
2.3, according to the different character colors on the pieces and the meaning of each component of the HSV color model, determine the H, S and V component ranges of the red and green characters: for red, H, S and V range over 1-15, 60-255 and 0-93 respectively; for green, they range over 0-95, 0-255 and 0-93 respectively.
First, the acquired image is spatially corrected using a perspective transform, which keeps the geometric figure projected onto the bearing plane unchanged and rectifies the image into an orthographic-projection-like view. Secondly, to eliminate unnecessary interference from the surroundings when extracting the pieces, reduce the later image-processing time and increase the positioning accuracy of the piece coordinates, an ROI is set on the transformed image. Through the perspective transform and the ROI, the source image is reduced to an image containing only the pieces and the board, preparing for the segmentation and spatial localization of the pieces in the next step. A minimal preprocessing sketch is given below.
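The following sketch illustrates sub-steps 2.1 and 2.2 with OpenCV's perspective-transform functions; the corner coordinates, output size and file name are placeholders, not values from the patent.

```python
import cv2
import numpy as np

# Minimal sketch of sub-steps 2.1-2.2 (assumed corner points and sizes):
# rectify the board to a fronto-parallel view, then crop an ROI so that
# only the board and pieces remain for segmentation.
src = cv2.imread("chessboard_scene.png")                     # assumed file name

board_corners = np.float32([[112, 80], [530, 95], [545, 470], [98, 455]])  # assumed
side = 400                                                   # assumed output size in pixels
dst_corners = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

M = cv2.getPerspectiveTransform(board_corners, dst_corners)
rectified = cv2.warpPerspective(src, M, (side, side))

# ROI: here the warp output already bounds the board, so the crop is trivial.
x, y, w, h = 0, 0, side, side
roi = rectified[y:y + h, x:x + w]
```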
Because the HSV color model is well suited to segmenting a specified color, the pieces are segmented with the HSV model. According to the different character colors on the pieces and the meaning of each component of the HSV color model, the H, S and V ranges of the red and green characters are determined, the red and green pieces are segmented separately, and two binary images are obtained.
In step 7, from the conversion between a point Wp in the world coordinate system and the point Kp in the color-camera coordinate system, Kp = Rrgb * Wp + Trgb, and the conversion between Wp and the point Kdp in the depth-camera coordinate system, Kdp = Rd * Wp + Td, the relationship between the Kinect color-camera coordinate system and the depth-camera coordinate system is computed; then, with the calibrated intrinsics of the depth camera, spatial points in the depth-camera coordinate system are transformed into the depth-pixel coordinate system, matching spatial points in the color-camera coordinate system with depth pixels in the depth-image coordinate system. The specific matching process is as follows:
The conversion between a point Wp in the world coordinate system and the point Kp in the color-camera coordinate system is
Kp = Rrgb * Wp + Trgb (1)
and the conversion between Wp and the point Kdp in the depth-camera coordinate system is
Kdp = Rd * Wp + Td (2)
From (1):
Wp = Rrgb^-1 * (Kp - Trgb) (3)
Substituting (3) into (2):
Kdp = Rd * Rrgb^-1 * (Kp - Trgb) + Td (4)
The point Kp in the color-camera coordinate system and the point Kdp in the depth-camera coordinate system satisfy
Kdp = R * Kp + T (5)
where R and T are the rotation matrix and translation vector between the two coordinate systems.
Combining (4) and (5):
R = Rd * Rrgb^-1 (6)
T = Td - Rd * Rrgb^-1 * Trgb (7)
Substituting (6) and (7) into (5) gives the conversion between a point in the color-camera coordinate system and the corresponding point in the depth-camera coordinate system:
Kdp = Rd * Rrgb^-1 * Kp + Td - Rd * Rrgb^-1 * Trgb
Multiplying Kdp by the intrinsic matrix of the depth camera then yields the pixel in the depth-image coordinate system that corresponds to the spatial point in the color-camera coordinate system, which realizes the matching from spatial points in the color-camera coordinate system to depth pixels in the depth-image coordinate system.
Part 2: a method for calculating the heights of chess pieces in three-dimensional space, comprising the following steps:
Step 1: acquire an image of the chess pieces whose spatial positions are to be determined, first preprocess the acquired image, then, according to the different colors of the characters on the pieces and the meaning of each component of the HSV color model, determine the H, S and V ranges of the red and green characters on the pieces, segment the red and green pieces separately, and obtain two binary images;
Step 2: linearly fuse the two binary images obtained from the red and green segmentations; after fusion the noise points produced by each extraction cover one another, yielding a single noise-free binary image that contains both the red and the green pieces. Because there are no noise points, the later image processing is greatly accelerated and its steps are simplified.
Because red is more vivid, the extraction of the red pieces is only slightly affected by environmental interference and adapts to most changes of environment. Green is darker, and the main noise in its extraction comes from pixels belonging to the red pieces; if the light dims, the noise points at the red-piece positions keep increasing. Preprocessing the green-piece extraction in that situation becomes cumbersome and does not reach the expected effect. Therefore the two extracted images are fused directly: the noise points present in the green extraction are covered by the red-piece regions of the red extraction, which reduces the subsequent image preprocessing and achieves the expected extraction result.
Step 3: apply a dilation operation to the fused binary image to connect adjacent elements, so that the region of each extracted piece becomes a single connected component that does not intersect the others;
Step 4: extract the contours of the pieces in the dilated image and draw the circumscribed circle of each contour; the circle center gives the position of each of the 32 pieces in the image coordinate system;
Step 5: place a planar calibration board in the plane of the 32 piece upper surfaces, compute the intrinsic and extrinsic parameters (Hrgb, Rrgb, Trgb) of the Kinect color camera with the OpenCV calibration function calibrateCamera, and then, using the conversion between the image coordinate system and the world coordinate system given by the pinhole camera model, compute the position of each piece in the world coordinate system, i.e. its position in space;
Step 6: according to the conversion from the world coordinate system to the color-camera coordinate system, obtain the three-dimensional coordinates of the 32 pieces in the color-camera coordinate system, choose the coordinate point A of one piece in that coordinate system, and form the vector from A to the origin O of the color-camera coordinate system;
Step 7: using the rotation matrix Rrgb between the color-camera coordinate system and the world coordinate system, transform that vector into the world coordinate system; since the world coordinates of the chosen point A are known, the position 1 (X1, Y1, Z1) of the Kinect color camera in the world coordinate system is uniquely obtained from the transformed vector, and Z1 is the vertical distance from the camera to the plane of the piece upper surfaces;
Step 8: place the planar calibration board on the chessboard surface, acquire an image of the scene and calibrate the color camera a second time, obtaining (Hrgb, Rrgb, Trgb); then choose any point B on the calibration board, transform B from the world coordinate system into the camera coordinate system using the conversion between the world coordinate system and the camera coordinate system in the pinhole camera model, and form the vector from B to the origin O of the color-camera coordinate system;
Step 9: using the rotation matrix Rrgb between the color-camera coordinate system after the second calibration and the world coordinate system, transform that vector into the world coordinate system; since the world coordinates of the chosen point B are known, the position 2 (X2, Y2, Z2) of the Kinect color camera in the world coordinate system is uniquely obtained from the transformed vector, and the vertical distance from the camera to the chessboard surface is Z2;
Step 10: according to the formula h = Z2 - Z1, h is the actual height of the chess pieces.
In step 1, the image of the pieces whose positions are to be determined is acquired and preprocessed, and the H, S and V ranges of the red and green characters on the pieces are determined from the different character colors and the meaning of each component of the HSV color model; the sub-steps are:
2.1, use a perspective transform to spatially correct the acquired piece image, rectifying it into an orthographic-projection-like view;
2.2, set an ROI on the image after the perspective transform;
2.3, according to the different character colors on the pieces and the meaning of each component of the HSV color model, determine the H, S and V component ranges of the red and green characters: for red, H, S and V range over 1-15, 60-255 and 0-93 respectively; for green, they range over 0-95, 0-255 and 0-93 respectively.
First, the acquired image is spatially corrected using a perspective transform, which keeps the geometric figure projected onto the bearing plane unchanged and rectifies the image into an orthographic-projection-like view. Secondly, to eliminate unnecessary interference from the surroundings when extracting the pieces, reduce the later image-processing time and increase the positioning accuracy of the piece coordinates, an ROI is set on the transformed image. Through the perspective transform and the ROI, the source image is reduced to an image containing only the pieces and the board, preparing for the segmentation and spatial localization of the pieces in the next step.
Because the HSV color model is well suited to segmenting a specified color, the pieces are segmented with the HSV model. According to the different character colors on the pieces and the meaning of each component of the HSV color model, the H, S and V ranges of the red and green characters are determined, the red and green pieces are segmented separately, and two binary images are obtained.
Through steps 1-10 above, the actual height of the chess pieces is obtained accurately.
The method proposed by the present invention for determining the positions of chess pieces in three-dimensional space can effectively increase the speed of image processing; its treatment of noise points simplifies the image-preprocessing flow and improves the accuracy of piece localization. The method for calculating the heights of chess pieces in three-dimensional space is applicable to the height measurement of a single regular object, reduces algorithmic complexity and the cost of purchasing cameras, and can be widely applied. The combination of the two methods improves the accuracy with which the manipulator grasps the pieces.
Description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows the perspective transformation;
Fig. 3 shows the ROI;
Fig. 4 shows the extraction of the red pieces;
Fig. 5 shows the extraction of the green pieces;
Fig. 6 shows the linear fusion;
Fig. 7 shows the dilation operation;
Fig. 8 shows the pixel coordinates of the pieces.
Detailed description of the embodiments
The two methods of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.
Part 1: the method for determining the positions of chess pieces in three-dimensional space comprises the following steps:
Step 1: acquire an image of the chess pieces whose spatial positions are to be determined and first preprocess it. First, use a perspective transform to spatially correct the acquired piece image, rectifying it into an orthographic-projection-like view, as in Fig. 2; secondly, set an ROI on the transformed image, as in Fig. 3; finally, according to the different colors of the characters on the pieces and the meaning of each component of the HSV color model, and by comparison under different lighting conditions and environments, the H, S and V component ranges of the red and green characters were determined as: for red, 1-15, 60-255 and 0-93 respectively; for green, 0-95, 0-255 and 0-93 respectively. The red and green pieces are segmented separately, giving two binary images, as in Figs. 4 and 5;
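A minimal sketch of the segmentation in step 1, using the H, S, V ranges stated above; roi is assumed to be the rectified, cropped image from the preprocessing sketch.

```python
import cv2
import numpy as np

# Sketch of step 1 segmentation with the stated ranges
# (red: H 1-15, S 60-255, V 0-93; green: H 0-95, S 0-255, V 0-93).
hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

red_mask = cv2.inRange(hsv, np.array([1, 60, 0]), np.array([15, 255, 93]))
green_mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([95, 255, 93]))
```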
Step 2: linearly fuse the two binary images obtained from the red and green segmentations; after fusion the noise points produced by each extraction cover one another, yielding a single noise-free binary image that contains both the red and the green pieces, as in Fig. 6;
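A sketch of the linear fusion in step 2, continuing from the masks above; cv2.addWeighted followed by a re-threshold is one plausible realization of the fusion described, not necessarily the exact operation used in the patent.

```python
# Sketch of step 2: linear fusion of the two binary masks, then a
# re-threshold back to a binary image. Isolated noise in one mask is
# absorbed by the solid piece regions of the other.
fused = cv2.addWeighted(red_mask, 0.5, green_mask, 0.5, 0)
_, fused = cv2.threshold(fused, 0, 255, cv2.THRESH_BINARY)
```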
Step 3: apply a dilation operation to the fused binary image to connect adjacent elements, so that the region of each extracted piece becomes a single connected component that does not intersect the others. The dilation uses a 10x10 kernel with the reference point at its center, as in Fig. 7;
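A sketch of the dilation in step 3 with the 10x10 kernel mentioned above, continuing from the fused mask.

```python
# Sketch of step 3: dilation with a 10x10 kernel, reference point at the
# kernel center, so each piece becomes a single connected region.
kernel = np.ones((10, 10), np.uint8)
dilated = cv2.dilate(fused, kernel, anchor=(5, 5), iterations=1)
```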
Step 4: extract the contours of the pieces in the dilated image and draw the circumscribed circle of each contour; the circle center gives the position of each of the 32 pieces in the image coordinate system, as in Fig. 8;
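A sketch of step 4 using OpenCV contour extraction and the minimum enclosing circle; the radius filter is an added assumption used only to discard tiny blobs.

```python
# Sketch of step 4 (OpenCV >= 4 return signature): extract contours and
# take the minimum enclosing circle of each; the circle center is the
# piece position in image (pixel) coordinates.
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
centers = []
for c in contours:
    (cx, cy), radius = cv2.minEnclosingCircle(c)
    if radius > 8:                       # assumed minimum radius to ignore specks
        centers.append((int(cx), int(cy)))
```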
Step 5: place a planar calibration board in the plane of the 32 piece upper surfaces, compute the intrinsic and extrinsic parameters (Hrgb, Rrgb, Trgb) of the Kinect color camera with the OpenCV calibration function calibrateCamera, and then, using the conversion between the image coordinate system and the world coordinate system given by the pinhole camera model, compute the position of each piece in the world coordinate system, i.e. its position in space;
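A sketch of step 5 under common planar-calibration assumptions: cv2.calibrateCamera provides the intrinsics Hrgb and one view's extrinsics (Rrgb, Trgb), and a piece center (u, v) is back-projected onto the Zw = 0 plane of the calibration board. The lists obj_points and img_points and the tuple image_size are assumed to have been collected beforehand, and lens distortion is ignored for brevity.

```python
import cv2
import numpy as np

# Sketch of step 5: calibrate the color camera on the planar board lying in
# the plane of the piece tops, then back-project a piece center (u, v) onto
# that plane (Zw = 0) with the pinhole model.
ret, Hrgb, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)

Rrgb, _ = cv2.Rodrigues(rvecs[0])        # extrinsics of the view used for localization
Trgb = tvecs[0].reshape(3, 1)

def pixel_to_world_on_plane(u, v):
    """Back-project pixel (u, v) to world coordinates on the Zw = 0 plane."""
    # With Zw = 0 the projection reduces to a homography built from the
    # first two columns of Rrgb and the translation Trgb.
    H_plane = Hrgb @ np.hstack([Rrgb[:, 0:1], Rrgb[:, 1:2], Trgb])
    xy1 = np.linalg.solve(H_plane, np.array([u, v, 1.0]))
    xy1 /= xy1[2]
    return xy1[0], xy1[1], 0.0           # (Xw, Yw, Zw) on the piece-top plane
```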
Step 6: place the planar calibration board in the plane of the 32 piece upper surfaces and compute the intrinsic and extrinsic parameters (Hd, Rd, Td) of the Kinect depth camera with the OpenCV calibration function calibrateCamera;
Not every image acquired by the infrared camera allows corner extraction; corner detection can fail because of the material of the object, so more images need to be acquired to make the infrared-camera calibration more accurate. In this embodiment 25 images were acquired;
Step 7: from the conversion between a point Wp in the world coordinate system and the point Kp in the color-camera coordinate system, Kp = Rrgb * Wp + Trgb, and the conversion between Wp and the point Kdp in the depth-camera coordinate system, Kdp = Rd * Wp + Td, compute the relationship between the Kinect color-camera coordinate system and the depth-camera coordinate system; then, using the calibrated intrinsics of the depth camera, transform spatial points in the depth-camera coordinate system into the depth-pixel coordinate system, thereby matching spatial points in the color-camera coordinate system with depth pixels in the depth-image coordinate system. With this matching, transform the piece positions from the color-camera coordinate system into the depth-image coordinate system and compute the piece coordinates in the depth coordinate system;
The matching between spatial points in the color-camera coordinate system and depth pixels in the depth-image coordinate system proceeds as follows:
The conversion between a point Wp in the world coordinate system and the point Kp in the color-camera coordinate system is
Kp = Rrgb * Wp + Trgb (1)
and the conversion between Wp and the point Kdp in the depth-camera coordinate system is
Kdp = Rd * Wp + Td (2)
From (1):
Wp = Rrgb^-1 * (Kp - Trgb) (3)
Substituting (3) into (2):
Kdp = Rd * Rrgb^-1 * (Kp - Trgb) + Td (4)
The point Kp in the color-camera coordinate system and the point Kdp in the depth-camera coordinate system satisfy
Kdp = R * Kp + T (5)
where R and T are the rotation matrix and translation vector between the two coordinate systems.
Combining (4) and (5):
R = Rd * Rrgb^-1 (6)
T = Td - Rd * Rrgb^-1 * Trgb (7)
Substituting (6) and (7) into (5) gives the conversion between a point in the color-camera coordinate system and the corresponding point in the depth-camera coordinate system:
Kdp = Rd * Rrgb^-1 * Kp + Td - Rd * Rrgb^-1 * Trgb
Multiplying Kdp by the intrinsic matrix of the depth camera then yields the pixel in the depth-image coordinate system that corresponds to the spatial point in the color-camera coordinate system, which realizes the matching from spatial points in the color-camera coordinate system to depth pixels in the depth-image coordinate system.
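The relationship derived above can be written directly in code; the sketch below assumes Rrgb, Trgb, Rd, Td are the 3x3 / 3x1 NumPy arrays from the two calibrations and Hd is the depth-camera intrinsic matrix.

```python
import numpy as np

# Sketch of the matching derived above: R = Rd * Rrgb^-1 and
# T = Td - R * Trgb relate the color-camera and depth-camera frames;
# projecting with the depth intrinsics Hd gives the depth-image pixel.
R = Rd @ np.linalg.inv(Rrgb)             # Rrgb is a rotation, so inverse == transpose
T = Td - R @ Trgb

def color_point_to_depth_pixel(Kp):
    """Kp: 3x1 point in color-camera coordinates -> (u, v) in the depth image."""
    Kdp = R @ Kp + T                     # point in depth-camera coordinates
    uvw = Hd @ Kdp                       # project with the depth intrinsics
    return float(uvw[0] / uvw[2]), float(uvw[1] / uvw[2])
```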
Step 8: according to the way depth data are stored in the depth map acquired by the Kinect, extract the depth information at the piece coordinates obtained in the depth-pixel coordinate system, obtaining the vertical distance d from the center of the piece upper surface in space to the sensor plane;
The Kinect depth information is stored in a 16-bit image: the first 13 bits are the depth and the last 3 bits are a player-index value. When no person appears in the image, the 16 bits can be read directly to obtain the distance information;
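A sketch of the depth read-out in step 8, assuming the packed 16-bit format described above and a NumPy depth frame indexed as depth_frame[v, u] (assumed variable names).

```python
# Sketch of step 8 (assumed variable names): read the 16-bit value at the
# matched depth pixel. The high 13 bits hold the depth in millimeters, the
# low 3 bits a player index; when no person is in the image the raw value
# can be used directly, as noted above.
raw = int(depth_frame[v, u])
d = raw >> 3                             # vertical distance d in mm (player index stripped)
```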
Step 9: project the piece coordinates obtained in step 7 in the depth-camera coordinate system onto the XOY plane of that coordinate system and compute the distance d1 from the projected point to the coordinate origin; combining it with the distance d obtained in step 8 and using the Pythagorean theorem D^2 = d1^2 + d^2, compute the actual distance D from the center of the piece upper surface to the sensor, thereby completing the determination of the piece positions in three-dimensional space. Through steps 1-9, the position of each piece in space and its accurate distance D from the Kinect infrared camera are obtained, as listed in the table below.
(Table: actual distance from each chess piece to the sensor; not reproduced here.) In the images acquired by the infrared camera, not every image allows corner extraction; corner detection can fail because of the material of the object, so more images need to be acquired to make the infrared-camera calibration more accurate; it is suggested that 20-25 images be acquired.
Part 2: the method for calculating the heights of the chess pieces:
Step 1: acquire an image of the chess pieces whose spatial positions are to be determined and first preprocess it. First, use a perspective transform to spatially correct the acquired piece image, rectifying it into an orthographic-projection-like view, as in Fig. 2; secondly, set an ROI on the transformed image, as in Fig. 3; finally, according to the different colors of the characters on the pieces and the meaning of each component of the HSV color model, and through experiments under the different lighting conditions and environments of this work, determine the H, S and V component ranges of the red and green characters as: for red, 1-15, 60-255 and 0-93 respectively; for green, 0-95, 0-255 and 0-93 respectively. Segment the red and green pieces separately to obtain two binary images, as in Figs. 4 and 5;
Step 2: linearly fuse the two binary images obtained from the red and green segmentations; after fusion the noise points produced by each extraction cover one another, yielding a single noise-free binary image that contains both the red and the green pieces, as in Fig. 6;
Step 3: apply a dilation operation to the fused binary image to connect adjacent elements, so that the region of each extracted piece becomes a single connected component that does not intersect the others. The dilation uses a 10x10 kernel with the reference point at its center, as in Fig. 7;
Step 4: extract the contours of the pieces in the dilated image and draw the circumscribed circle of each contour; the circle center gives the position of each of the 32 pieces in the image coordinate system, as in Fig. 8;
Step 5: place a planar calibration board in the plane of the 32 piece upper surfaces, compute the intrinsic and extrinsic parameters (Hrgb, Rrgb, Trgb) of the Kinect color camera with the OpenCV calibration function calibrateCamera, and then, using the conversion between the image coordinate system and the world coordinate system given by the pinhole camera model, compute the position of each piece in the world coordinate system, i.e. its position in space;
Step 6: according to the conversion from the world coordinate system to the color-camera coordinate system, obtain the three-dimensional coordinates of the 32 pieces in the color-camera coordinate system, choose the coordinate point A (124, 68, 0) of one piece in that coordinate system, and form the vector from A to the origin O of the color-camera coordinate system;
Step 7: using the rotation matrix Rrgb between the color-camera coordinate system and the world coordinate system, transform that vector into the world coordinate system; since the world coordinates of the chosen point A are known, the position 1 (-56, 287, 874) of the Kinect color camera in the world coordinate system is uniquely obtained from the transformed vector, i.e. the vertical distance from the camera to the plane of the piece upper surfaces is 874 mm;
Step 8: place the planar calibration board on the chessboard surface, acquire an image of the scene and calibrate the color camera a second time, obtaining (Hrgb, Rrgb, Trgb); then choose the point B (0, 0, 0) on the calibration board, transform B from the world coordinate system into the camera coordinate system using the conversion between the world coordinate system and the camera coordinate system in the pinhole camera model, and form the vector from B to the origin O of the color-camera coordinate system;
Step 9: using the rotation matrix Rrgb between the color-camera coordinate system after the second calibration and the world coordinate system, transform that vector into the world coordinate system; since the world coordinates of the chosen point B are known, the position 2 (-87, 209, 886) of the Kinect color camera in the world coordinate system is uniquely obtained from the transformed vector, i.e. the vertical distance from the camera to the chessboard surface is 886 mm;
Step 10: according to the formula h = Z2 - Z1, h is the actual height of the chess pieces; the calculated actual height of the piece is 12 mm.
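The height computation in steps 6-10 amounts to recovering the camera center in each world frame. The following is a minimal sketch under the pinhole relation Kp = Rrgb * Wp + Trgb; Rrgb_pieces/Trgb_pieces and Rrgb_board/Trgb_board are assumed variable names for the extrinsics from the first and second calibrations.

```python
import numpy as np

# Sketch of steps 6-10: with Kp = Rrgb * Wp + Trgb, setting Kp = 0 gives the
# camera center in world coordinates, Wc = -Rrgb^-1 * Trgb, whose Z component
# is the vertical distance from the camera to the calibration plane.
def camera_height(Rrgb, Trgb):
    Wc = -np.linalg.inv(Rrgb) @ Trgb.reshape(3, 1)
    return float(Wc[2])

Z1 = camera_height(Rrgb_pieces, Trgb_pieces)   # board on the piece upper surfaces
Z2 = camera_height(Rrgb_board, Trgb_board)     # board on the chessboard surface
h = Z2 - Z1                                    # piece height (874 mm and 886 mm give 12 mm above)
```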

Claims (5)

1. A method for determining the positions of chess pieces in three-dimensional space, characterized by comprising the following steps:
Step 1: acquire an image of the chess pieces whose spatial positions are to be determined, first preprocess the acquired image, then, according to the different colors of the characters on the pieces and the meaning of each component of the HSV color model, determine the H, S and V ranges of the red and green characters on the pieces, segment the red and green pieces separately, and obtain two binary images;
Step 2: linearly fuse the two binary images obtained from the red and green segmentations; after fusion the noise points produced by each extraction cover one another, yielding a single noise-free binary image that contains both the red and the green pieces;
Step 3: apply a dilation operation to the fused binary image to connect adjacent elements, so that the region of each extracted piece becomes a single connected component that does not intersect the others;
Step 4: extract the contours of the pieces in the dilated image and draw the circumscribed circle of each contour; the circle center gives the position of each of the 32 pieces in the image coordinate system;
Step 5: place a planar calibration board in the plane of the 32 piece upper surfaces, compute the intrinsic and extrinsic parameters (Hrgb, Rrgb, Trgb) of the Kinect color camera with the OpenCV calibration function calibrateCamera, and then, using the conversion between the image coordinate system and the world coordinate system given by the pinhole camera model, compute the position of each piece in the world coordinate system, i.e. its position in space;
Step 6: place the planar calibration board in the plane of the 32 piece upper surfaces and compute the intrinsic and extrinsic parameters (Hd, Rd, Td) of the Kinect depth camera with the OpenCV calibration function calibrateCamera;
Step 7: from the conversion between a point Wp in the world coordinate system and the point Kp in the color-camera coordinate system, Kp = Rrgb * Wp + Trgb, and the conversion between Wp and the point Kdp in the depth-camera coordinate system, Kdp = Rd * Wp + Td, compute the relationship between the Kinect color-camera coordinate system and the depth-camera coordinate system; then, using the calibrated intrinsics of the depth camera, transform spatial points in the depth-camera coordinate system into the depth-pixel coordinate system, thereby matching spatial points in the color-camera coordinate system with depth pixels in the depth-image coordinate system; with this matching, transform the piece positions from the color-camera coordinate system into the depth-image coordinate system and compute the piece coordinates in the depth coordinate system;
Step 8: according to the way depth data are stored in the depth map acquired by the Kinect, extract the depth information at the piece coordinates obtained in the depth-pixel coordinate system, obtaining the vertical distance d from the center of the piece upper surface in space to the sensor plane;
Step 9: project the piece coordinates obtained in step 7 in the depth-camera coordinate system onto the XOY plane of that coordinate system and compute the distance d1 from the projected point to the coordinate origin; combining it with the distance d obtained in step 8 and using the Pythagorean theorem D^2 = d1^2 + d^2, compute the actual distance D from the center of the piece upper surface to the sensor, which completes the determination of the piece positions in three-dimensional space.
2. The method for determining the positions of chess pieces in three-dimensional space according to claim 1, characterized in that in step 1 the image of the pieces whose positions are to be determined is acquired and preprocessed, and the H, S and V ranges of the red and green characters on the pieces are determined from the different character colors and the meaning of each component of the HSV color model, with the sub-steps:
2.1, use a perspective transform to spatially correct the acquired piece image, rectifying it into an orthographic-projection-like view;
2.2, set an ROI on the image after the perspective transform;
2.3, according to the different character colors on the pieces and the meaning of each component of the HSV color model, determine the H, S and V component ranges of the red and green characters: for red, H, S and V range over 1-15, 60-255 and 0-93 respectively; for green, they range over 0-95, 0-255 and 0-93 respectively.
3. The method for determining the positions of chess pieces in three-dimensional space according to claim 2, characterized in that in step 7, from the conversion between a point Wp in the world coordinate system and the point Kp in the color-camera coordinate system, Kp = Rrgb * Wp + Trgb, and the conversion between Wp and the point Kdp in the depth-camera coordinate system, Kdp = Rd * Wp + Td, the relationship between the Kinect color-camera coordinate system and the depth-camera coordinate system is computed; then, with the calibrated intrinsics of the depth camera, spatial points in the depth-camera coordinate system are transformed into the depth-pixel coordinate system, matching spatial points in the color-camera coordinate system with depth pixels in the depth-image coordinate system; the specific matching process is as follows:
the conversion between a point Wp in the world coordinate system and the point Kp in the color-camera coordinate system is Kp = Rrgb * Wp + Trgb (1), and the conversion between Wp and the point Kdp in the depth-camera coordinate system is Kdp = Rd * Wp + Td (2); from (1), Wp = Rrgb^-1 * (Kp - Trgb) (3);
substituting (3) into (2) gives Kdp = Rd * Rrgb^-1 * (Kp - Trgb) + Td (4);
the point Kp in the color-camera coordinate system and the point Kdp in the depth-camera coordinate system satisfy Kdp = R * Kp + T (5), where R and T are the rotation matrix and translation vector between the two coordinate systems;
combining (4) and (5) gives R = Rd * Rrgb^-1 (6) and T = Td - Rd * Rrgb^-1 * Trgb (7);
substituting (6) and (7) into (5) gives the conversion between a point in the color-camera coordinate system and the corresponding point in the depth-camera coordinate system: Kdp = Rd * Rrgb^-1 * Kp + Td - Rd * Rrgb^-1 * Trgb;
multiplying Kdp by the intrinsic matrix of the depth camera then yields the pixel in the depth-image coordinate system that corresponds to the spatial point in the color-camera coordinate system, which realizes the matching from spatial points in the color-camera coordinate system to depth pixels in the depth-image coordinate system.
4. A method for calculating the heights of chess pieces in three-dimensional space, characterized by comprising the following steps:
Step 1: acquire an image of the chess pieces whose positions are to be determined, first preprocess the acquired image, then, according to the different colors of the characters on the pieces and the meaning of each component of the HSV color model, determine the H, S and V ranges of the red and green characters on the pieces, segment the red and green pieces separately, and obtain two binary images;
Step 2: linearly fuse the two binary images obtained from the red and green segmentations; after fusion the noise points produced by each extraction cover one another, yielding a single noise-free binary image that contains both the red and the green pieces;
Step 3: apply a dilation operation to the fused binary image to connect adjacent elements, so that the region of each extracted piece becomes a single connected component that does not intersect the others;
Step 4: extract the contours of the pieces in the dilated image and draw the circumscribed circle of each contour; the circle center gives the position of each of the 32 pieces in the image coordinate system;
Step 5: place a planar calibration board in the plane of the 32 piece upper surfaces, compute the intrinsic and extrinsic parameters (Hrgb, Rrgb, Trgb) of the Kinect color camera with the OpenCV calibration function calibrateCamera, and then, using the conversion between the image coordinate system and the world coordinate system given by the pinhole camera model, compute the position of each piece in the world coordinate system, i.e. its position in space;
Step 6: according to the conversion from the world coordinate system to the color-camera coordinate system, obtain the three-dimensional coordinates of the 32 pieces in the color-camera coordinate system, choose the coordinate point A of one piece in that coordinate system, and form the vector from A to the origin O of the color-camera coordinate system;
Step 7: using the rotation matrix Rrgb between the color-camera coordinate system and the world coordinate system, transform that vector into the world coordinate system; since the world coordinates of the chosen point A are known, the position 1 (X1, Y1, Z1) of the Kinect color camera in the world coordinate system is uniquely obtained from the transformed vector, and Z1 is the vertical distance from the camera to the plane of the piece upper surfaces;
Step 8: place the planar calibration board on the chessboard surface, acquire an image of the scene and calibrate the color camera a second time, obtaining (Hrgb, Rrgb, Trgb); then choose any point B on the calibration board, transform B from the world coordinate system into the camera coordinate system using the conversion between the world coordinate system and the camera coordinate system in the pinhole camera model, and form the vector from B to the origin O of the color-camera coordinate system;
Step 9: using the rotation matrix Rrgb between the color-camera coordinate system after the second calibration and the world coordinate system, transform that vector into the world coordinate system; since the world coordinates of the chosen point B are known, the position 2 (X2, Y2, Z2) of the Kinect color camera in the world coordinate system is uniquely obtained from the transformed vector, and the vertical distance from the camera to the chessboard surface is Z2;
Step 10: according to the formula h = Z2 - Z1, h is the actual height of the chess pieces.
5. The method for calculating the heights of chess pieces in three-dimensional space according to claim 4, characterized in that in step 1 the image of the pieces whose positions are to be determined is acquired and preprocessed, and the H, S and V ranges of the red and green characters on the pieces are determined from the different character colors and the meaning of each component of the HSV color model, with the sub-steps:
2.1, use a perspective transform to spatially correct the acquired piece image, rectifying it into an orthographic-projection-like view;
2.2, set an ROI on the image after the perspective transform;
2.3, according to the different character colors on the pieces and the meaning of each component of the HSV color model, determine the H, S and V component ranges of the red and green characters: for red, H, S and V range over 1-15, 60-255 and 0-93 respectively; for green, they range over 0-95, 0-255 and 0-93 respectively.
CN201810374622.9A 2018-04-24 2018-04-24 Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces Active CN108550169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810374622.9A CN108550169B (en) 2018-04-24 2018-04-24 Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810374622.9A CN108550169B (en) 2018-04-24 2018-04-24 Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces

Publications (2)

Publication Number Publication Date
CN108550169A true CN108550169A (en) 2018-09-18
CN108550169B CN108550169B (en) 2021-08-10

Family

ID=63512354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810374622.9A Active CN108550169B (en) 2018-04-24 2018-04-24 Method for determining positions of chess pieces in three-dimensional space and calculating heights of chess pieces

Country Status (1)

Country Link
CN (1) CN108550169B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335224A (en) * 2019-07-05 2019-10-15 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN110342252A (en) * 2019-07-01 2019-10-18 芜湖启迪睿视信息技术有限公司 Automatic article grabbing method and automatic grabbing device
CN111798511A (en) * 2020-05-21 2020-10-20 扬州哈工科创机器人研究院有限公司 Chessboard and chessman positioning method and device
CN112784717A (en) * 2021-01-13 2021-05-11 中北大学 Automatic pipe fitting sorting method based on deep learning
CN114734456A (en) * 2022-03-23 2022-07-12 深圳市商汤科技有限公司 Chess playing method, device, electronic equipment, chess playing robot and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030062675A1 (en) * 2001-09-28 2003-04-03 Canon Kabushiki Kaisha Image experiencing system and information processing method
US20040208359A1 (en) * 2001-11-07 2004-10-21 Davar Pishva Image highlight correction using illumination specific hsv color coordinate
US20100015579A1 (en) * 2008-07-16 2010-01-21 Jerry Schlabach Cognitive amplification for contextual game-theoretic analysis of courses of action addressing physical engagements
US20150279016A1 (en) * 2014-03-27 2015-10-01 Electronics And Telecommunications Research Institute Image processing method and apparatus for calibrating depth of depth sensor
CN107766855A (en) * 2017-10-25 2018-03-06 南京阿凡达机器人科技有限公司 Chess piece localization method, system, storage medium and robot based on machine vision

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030062675A1 (en) * 2001-09-28 2003-04-03 Canon Kabushiki Kaisha Image experiencing system and information processing method
US20040208359A1 (en) * 2001-11-07 2004-10-21 Davar Pishva Image highlight correction using illumination specific hsv color coordinate
US20100015579A1 (en) * 2008-07-16 2010-01-21 Jerry Schlabach Cognitive amplification for contextual game-theoretic analysis of courses of action addressing physical engagements
US20150279016A1 (en) * 2014-03-27 2015-10-01 Electronics And Telecommunications Research Institute Image processing method and apparatus for calibrating depth of depth sensor
CN107766855A (en) * 2017-10-25 2018-03-06 南京阿凡达机器人科技有限公司 Chess piece localization method, system, storage medium and robot based on machine vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU GUI, TAO JUN: "Chinese Chess Recognition Algorithm Based on Computer Vision", 2014 26TH CHINESE CONTROL AND DECISION CONFERENCE *
WANG DIANJUN (王殿君): "Vision-based recognition and localization technology for Chinese chess pieces" (基于视觉的中国象棋棋子识别定位技术), Journal of Tsinghua University (Science and Technology) (清华大学学报(自然科学版)) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110342252A (en) * 2019-07-01 2019-10-18 芜湖启迪睿视信息技术有限公司 Automatic article grabbing method and automatic grabbing device
CN110342252B (en) * 2019-07-01 2024-06-04 河南启迪睿视智能科技有限公司 Automatic article grabbing method and automatic grabbing device
CN110335224A (en) * 2019-07-05 2019-10-15 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN110335224B (en) * 2019-07-05 2022-12-13 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111798511A (en) * 2020-05-21 2020-10-20 扬州哈工科创机器人研究院有限公司 Chessboard and chessman positioning method and device
CN112784717A (en) * 2021-01-13 2021-05-11 中北大学 Automatic pipe fitting sorting method based on deep learning
CN112784717B (en) * 2021-01-13 2022-05-13 中北大学 Automatic pipe fitting sorting method based on deep learning
CN114734456A (en) * 2022-03-23 2022-07-12 深圳市商汤科技有限公司 Chess playing method, device, electronic equipment, chess playing robot and storage medium

Also Published As

Publication number Publication date
CN108550169B (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN108550169A (en) The computational methods of the determination of pieces of chess position and its height in three dimensions
CN107766855B (en) Chessman positioning method and system based on machine vision, storage medium and robot
CN106289106B (en) The stereo vision sensor and scaling method that a kind of line-scan digital camera and area array cameras are combined
CN104992441B (en) A kind of real human body three-dimensional modeling method towards individualized virtual fitting
CN102178530A (en) Method for automatically measuring human body dimensions on basis of three-dimensional point cloud data
CN104008571B (en) Human body model obtaining method and network virtual fitting system based on depth camera
CN109255813A (en) A kind of hand-held object pose real-time detection method towards man-machine collaboration
CN103535960B (en) Human body three-dimensional measurement method based on digital images
CN109816704A (en) The 3 D information obtaining method and device of object
CN107392947A (en) 2D 3D rendering method for registering based on coplanar four point set of profile
CN105701820A (en) Point cloud registration method based on matching area
CN107167093B (en) A kind of the combined type measuring system and measurement method of laser line scanning and shadow Moire
CN106780618A (en) 3 D information obtaining method and its device based on isomery depth camera
US20070098250A1 (en) Man-machine interface based on 3-D positions of the human body
JP2009093611A (en) System and method for recognizing three-dimensional object
CN103948196A (en) Human body data measuring method
CN104408762A (en) Method for obtaining object image information and three-dimensional model by using monocular unit and two-dimensional platform
CN107543496A (en) A kind of stereo-visiuon measurement handmarking point based on speckle image matching
CN110428465A (en) View-based access control model and the mechanical arm grasping means of tactile, system, device
CN108288293A (en) A kind of scaling method based on line-structured light
CN108629756A (en) A kind of Kinect v2 depth images Null Spot restorative procedure
CN106125907B (en) A kind of objective registration method based on wire-frame model
CN107595388A (en) A kind of near infrared binocular visual stereoscopic matching process based on witch ball mark point
CN110648362B (en) Binocular stereo vision badminton positioning identification and posture calculation method
CN109308462B (en) Finger vein and knuckle print region-of-interest positioning method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant