CN108982901A - A rotational speed measurement method for a uniformly rotating body - Google Patents
A rotational speed measurement method for a uniformly rotating body
- Publication number: CN108982901A
- Application number: CN201810615505.7A
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- frame
- matrix
- frame image
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01P—MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
- G01P3/00—Measuring linear or angular speed; Measuring differences of linear or angular speeds
- G01P3/36—Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light
- G01P3/38—Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light using photographic means
Abstract
A rotational speed measurement method for a uniformly rotating body, for use in the technical field of rotational speed measurement. The invention solves the problem that traditional rotational speed measurement methods require human participation, so that the speed of a uniformly rotating body cannot be measured in an unmanned environment. The invention first trains a YOLO object detection model, then uses an RGB-D camera to acquire color images and depth images of the uniformly rotating body in real time, without human participation. Images whose confidence value is greater than 0.85 are selected with the model; the ORB features of the selected images are extracted and matched against the map points in a local map, and the rotation matrix and translation matrix of every frame are obtained with the PnP method. The rotation matrix of every frame is then optimized using g2o and loop-closure detection. Finally, a straight line is fitted with the random sample consensus (RANSAC) method to the rotational speeds of all frames, and the intercept of the line is taken as the rotational speed of the uniformly rotating body. The relative error of the speed measurement is within 4%. The invention is applicable to the technical field of rotational speed measurement.
Description
Technical field
The invention belongs to the technical field of rotational speed measurement of uniformly rotating bodies, and in particular relates to a rotational speed measurement method for a uniformly rotating body.
Background technique
Rotational speed is a particularly important state parameter of a rotating object. Precise speed measurement is required everywhere, from the normal operation of industrial machinery to the capture of non-cooperative targets in spaceflight. There are many traditional speed measurement approaches, such as hardware counting, software counting, spectral measurement, photoelectric sensing, and mechanical centrifugal measurement. Hardware counting measures the speed from a pulse count. Software counting converts the analog signal into a digital signal through AD conversion and then derives the instantaneous speed. Spectral analysis applies an FFT to an acquired vibration signal and estimates the speed from the spectrum. Mechanical centrifugal measurement attaches a weight to the rotating object and exploits the fact that the centrifugal force on the weight differs with speed; the speed is displayed by a pointer driven through a gear mechanism. Photoelectric sensing usually attaches a photoelectric or electromagnetic sensor to the shaft and measures the speed from the sensor readings. Although the traditional methods above can measure the speed of a rotating object, all of them require human participation during the measurement and are therefore not applicable in some unmanned environments.
Summary of the invention
The purpose of the present invention is to solve the problem that traditional rotational speed measurement methods require human participation, so that the speed of a uniformly rotating body cannot be measured in an unmanned environment.
The technical solution adopted by the present invention to solve the above technical problem is:
A rotational speed measurement method for a uniformly rotating body, the specific steps of which are as follows:
Step 1: pre-train a YOLO object detection model on the ImageNet database, then input the previously obtained images of the measured uniformly rotating body into the pre-trained YOLO model for training, obtaining the trained YOLO object detection model;
Step 2: extract ORB feature points from the previously obtained images of the measured uniformly rotating body, and cluster them with the K-means algorithm to build a k-ary tree dictionary;
Step 3: define three coordinate systems: the C frame, the C0 frame, and the B frame.
The C frame is the camera coordinate system; the B frame is the body coordinate system of the uniformly rotating body and passes through the body's centroid. The C0 frame moves synchronously with the B frame; at the moment the camera captures the first frame whose YOLO confidence value is greater than or equal to 0.85, the C0 frame coincides with the C frame.
The actual motion of each frame is as follows: with the ground as the reference, the C frame is static; the B frame spins at a constant angular velocity, possibly with translation; the C0 frame rotates with the spin of the uniformly rotating body and translates with its translation, so the C0 frame and the B frame differ only by a translation matrix.
Step 3.1: acquire color images and depth images of the uniformly rotating body in real time with an RGB-D camera, and input the acquired color images into the trained YOLO object detection model for real-time target recognition and detection. If the confidence value that the YOLO model outputs for a frame is greater than or equal to 0.85, record the x, y, w, and h values output by the model for that frame, and denote the region formed by these x, y, w, and h values as the ROI of the frame. The set of all frames whose confidence value is greater than or equal to 0.85 is denoted H.
Here x and y are the center coordinates of the pixel region containing the uniformly rotating body, w is the width of that pixel region, and h is its height.
Step 3.2: extract the ORB feature points in the ROI of the i-th frame in set H, and match the ORB feature points of the (i-1)-th frame against those in the ROI of the i-th frame in H, i ≥ 2. Compute the actual translational velocity of the uniformly rotating body in the camera coordinate system and translate the camera at the same velocity, ensuring that the rotating body always stays within the camera's field of view.
Step 3.3: by the principle of relative motion, the B frame and the C0 frame can be treated as static while the C frame rotates and translates around the B frame. Compute the rotation matrix Ri and translation matrix ti of the C frame of the i-th frame in set H relative to the C0 frame, and pass the Rk and tk of frames 1 through i into the graph optimization of step 3.4 for back-end optimization; k = 1, 2, …, i.
Step 3.4: build a factor graph and perform back-end optimization on it with the g2o method, i.e. optimize the rotation matrices Rk and translation matrices tk of the C frames of frames 1 through i in set H relative to the C0 frame together with the map points in the local map, obtaining the optimized Rk, tk, and local map points.
Step 3.5: if the i-th frame in set H is a frame to be checked, perform loop-closure detection based on the bag-of-words model and judge whether the i-th frame closes a loop. If it does, optimize once more the rotation matrices Rk, the translation matrices tk, and the local map points of frames 1 through i relative to the C0 frame; if not, go to step 4.
If the i-th frame in set H is not a frame to be checked, go directly to step 4.
Step 4: convert the optimized rotation matrix of the C frame of the i-th frame in set H relative to the C0 frame into the rotation matrix of the C frame relative to the B frame, express that rotation matrix in the form of a rotation axis and a rotation angle, and compute the rotational speed ωi corresponding to the i-th frame.
Step 5: perform steps 3 through 4 for every frame whose YOLO confidence value is greater than or equal to 0.85, until a loop has been detected twice in step 3.5; the RGB-D camera then stops acquiring images.
Step 6: fit a straight line to the rotational speeds ωi of all frames with the random sample consensus (RANSAC) method, and take the intercept of the line as the rotational speed of the uniformly rotating body.
The beneficial effects of the present invention are as follows. The invention provides a rotational speed measurement method for a uniformly rotating body. A YOLO object detection model is first pre-trained on the ImageNet database and then trained on the previously obtained images of the measured uniformly rotating body. While the algorithm runs, an RGB-D camera acquires color images and depth images of the uniformly rotating body in real time, and the color images are input into the trained model. If the confidence value output by the model is greater than or equal to 0.85, the ORB features of the current frame are extracted and matched against the map points in the local map, and the rotation matrix and translation matrix of the current frame are obtained with a 3D-2D method. The rotation matrix of every frame is then optimized using g2o and loop-closure detection. Finally, the rotational speeds of all frames are computed, a straight line is fitted to them with RANSAC, and the intercept of the line is taken as the rotational speed of the uniformly rotating body. Compared with traditional methods, the method of the invention acquires images with a camera in real time and needs no human participation, i.e. the speed of a uniformly rotating body can be measured in an unmanned environment, while the relative error of the speed measurement is within 4%.
The method of the invention performs well at measuring the rotational speed of an object in unmanned environments.
Brief description of the drawings
Fig. 1 is a flowchart of the rotational speed measurement method for a uniformly rotating body of the present invention;
Fig. 2 is a schematic diagram of the imaging process of a spatial point P under the camera of the present invention;
wherein: P(X, Y, Z) is a point in the camera coordinate system, and p(x, y) is the projection of P in the pixel coordinate system; 1 is the camera coordinate system C; 2 is the pixel coordinate system c; 3 is the optical axis; 4 is the incident ray;
Fig. 3 is a schematic diagram of the rotational speed estimated from every frame of the present invention;
wherein: the unit of the speed is radians per second (rad/s); the figure in the box is an enlargement of the circled region near the loop-closure detection;
Fig. 4 is a schematic diagram of the speed estimated from every frame before the loop-closure detection of the present invention;
wherein: RANSAC is the random sample consensus method;
Fig. 5 is a schematic diagram of the speed estimated from every frame after the loop-closure detection of the present invention;
Fig. 6 is a schematic diagram of the process of building the k-ary tree dictionary of the invention;
wherein: k is the number of child nodes at every level of the k-ary tree, and d is the depth of the k-ary tree.
Specific embodiment
The technical solution of the present invention is further described below with reference to the accompanying drawings; however, it is not limited thereto. Any modification or equivalent replacement of the technical solution of the present invention that does not depart from the spirit and scope of the technical solution shall be covered by the protection scope of the present invention.
Specific embodiment 1: this embodiment is described with reference to Fig. 1. The rotational speed measurement method for a uniformly rotating body described in this embodiment consists of the following specific steps:
Step 1: pre-train a YOLO object detection model on the ImageNet database, then input the previously obtained images of the measured uniformly rotating body into the pre-trained YOLO model for training, obtaining the trained YOLO object detection model;
Step 2: extract ORB feature points from the previously obtained images of the measured uniformly rotating body, and cluster them with the K-means algorithm to build a k-ary tree dictionary;
Step 3: define three coordinate systems: the C frame, the C0 frame, and the B frame.
The C frame is the camera coordinate system; the B frame is the body coordinate system of the uniformly rotating body and passes through the body's centroid. The C0 frame moves synchronously with the B frame; at the moment the camera captures the first frame whose YOLO confidence value is greater than or equal to 0.85, the C0 frame coincides with the C frame.
The actual motion of each frame is as follows: with the ground as the reference, the C frame is static; the B frame spins at a constant angular velocity, possibly with translation; the C0 frame rotates with the spin of the uniformly rotating body and translates with its translation, so the C0 frame and the B frame differ only by a translation matrix.
Step 3.1: acquire color images and depth images of the uniformly rotating body in real time with an RGB-D camera, and input the acquired color images into the trained YOLO object detection model for real-time target recognition and detection. If the confidence value that the YOLO model outputs for a frame is greater than or equal to 0.85, record the x, y, w, and h values output by the model for that frame, and denote the region formed by these x, y, w, and h values as the ROI of the frame. The set of all frames whose confidence value is greater than or equal to 0.85 is denoted H.
Here x and y are the center coordinates of the pixel region containing the uniformly rotating body, w is the width of that pixel region, and h is its height.
For example, if the image is 60×60 and the center coordinate is (30, 30), then x and y are both 0.5;
The object detection process is unified into a single neural network, which uses the information of the whole image to predict the bounding boxes of targets and identify their classes at the same time, achieving an end-to-end real-time detection task. YOLO first divides the image into an S × S grid of cells. If the center of a target falls into a grid cell, that cell is responsible for detecting the target. Each grid cell predicts B bounding boxes and a confidence value for each box. The confidence value represents how confident the model is that a box contains a target; the confidence is defined as Pr(Object) · IOU, so if there is no target the confidence is zero. In addition, the predicted confidence is expected to equal the intersection over union (IOU) between the prediction and the ground truth. Each bounding box comprises 5 values: x, y, w, h, and confidence. (x, y) represents the center of the box relative to the grid cell; (w, h) are the width and height of the box relative to the whole image; confidence represents the IOU between the predicted box and the ground truth.
When YOLO detects a target, the center of the pixel region of the image containing the target is (x, y), and the width and height of the region are w and h. However, x, y, w, and h are ratios: the actual center coordinates, width, and height of the box expressed as fractions of the whole image.
The camera and the non-cooperative target are relatively stationary, so once the pixel region containing the non-cooperative target has been determined in one frame, the region containing the target in the other frames is also known and does not need to be detected again.
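The relationship between YOLO's normalized x, y, w, h values and the pixel ROI described above can be sketched as follows (a minimal illustration under the conventions stated in the text; the function name is ours):

```python
def yolo_box_to_roi(x, y, w, h, img_w, img_h):
    """Convert YOLO's normalized (x, y, w, h) ratios into a pixel ROI.

    x, y: box center as a fraction of the image width/height.
    w, h: box width/height as a fraction of the image width/height.
    Returns (left, top, width, height) in pixels.
    """
    cx, cy = x * img_w, y * img_h   # center back in pixels
    pw, ph = w * img_w, h * img_h   # width/height back in pixels
    return int(cx - pw / 2), int(cy - ph / 2), int(pw), int(ph)

# The 60x60 image with center (30, 30) from the text gives x = y = 0.5;
# here we also assume a box covering half the image in each dimension.
roi = yolo_box_to_roi(0.5, 0.5, 0.5, 0.5, 60, 60)
```

Only frames whose confidence value is at least 0.85 would have their ROI recorded into set H.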
Step 3.2: extract the ORB feature points in the ROI of the i-th frame in set H, and match the ORB feature points of the (i-1)-th frame against those in the ROI of the i-th frame in H, i ≥ 2. Compute the actual translational velocity of the uniformly rotating body in the camera coordinate system and translate the camera at the same velocity, ensuring that the rotating body always stays within the camera's field of view.
Step 3.3: by the principle of relative motion, the B frame and the C0 frame can be treated as static while the C frame rotates and translates around the B frame. Compute the rotation matrix Ri and translation matrix ti of the C frame of the i-th frame in set H relative to the C0 frame, and pass the Rk and tk of frames 1 through i into the graph optimization of step 3.4 for back-end optimization; k = 1, 2, …, i.
Step 3.4: build a factor graph and perform back-end optimization on it with the g2o method, i.e. optimize the rotation matrices Rk and translation matrices tk of the C frames of frames 1 through i in set H relative to the C0 frame together with the map points in the local map, obtaining the optimized Rk, tk, and local map points.
Step 3.5: if the i-th frame in set H is a frame to be checked, perform loop-closure detection based on the bag-of-words model and judge whether the i-th frame closes a loop. If it does, optimize once more the rotation matrices Rk, the translation matrices tk, and the local map points of frames 1 through i relative to the C0 frame; if not, go to step 4.
If the i-th frame in set H is not a frame to be checked, go directly to step 4.
Step 4: convert the optimized rotation matrix of the C frame of the i-th frame in set H relative to the C0 frame into the rotation matrix of the C frame relative to the B frame, express that rotation matrix in the form of a rotation axis and a rotation angle, and compute the rotational speed ωi corresponding to the i-th frame.
Step 5: perform steps 3 through 4 for every frame whose YOLO confidence value is greater than or equal to 0.85, until a loop has been detected twice in step 3.5; the RGB-D camera then stops acquiring images.
Step 6: fit a straight line to the rotational speeds ωi of all frames with the random sample consensus (RANSAC) method, and take the intercept of the line as the rotational speed of the uniformly rotating body.
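The feature matching of step 3.2 above relies on the Hamming distance between binary ORB descriptors. A minimal brute-force matcher over synthetic descriptors (random bit strings standing in for real 256-bit ORB output; threshold and names are ours) might look like:

```python
import random

def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(desc_prev, desc_cur, max_dist=64):
    """Brute-force nearest-neighbour matching under Hamming distance,
    the core operation behind matching ORB features between frames."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = [hamming(d, c) for c in desc_cur]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches

# Hypothetical descriptors: frame i reuses frame i-1's descriptors with one
# bit flipped each, mimicking nearly identical features across frames.
rng = random.Random(1)
prev = [rng.getrandbits(256) for _ in range(5)]
cur = [d ^ (1 << rng.randrange(256)) for d in prev]
matches = match_descriptors(prev, cur)
```

Unrelated 256-bit descriptors differ in roughly half their bits, so a distance cutoff well below 128 separates true matches from accidental ones.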
If the i-th frame satisfies the optimization condition of step 3.5, step 4 uses the rotation matrix optimized by the loop-closure detection of step 3.5; if the i-th frame does not satisfy the optimization condition of step 3.5, step 4 uses the rotation matrix optimized in step 3.4;
After the rotation matrix Rcc0 and the translation matrix tcc0 of the C frame relative to the C0 frame have been estimated, the rotation matrix Rcb and the translation matrix tcb of the C frame relative to the B frame can be found based on the principle of coordinate transformation;
In this embodiment the B frame passes through the centroid of the uniformly rotating body, and the y-axis of the B frame does not necessarily coincide with the rotation axis. While spinning at a constant angular velocity the B frame may also translate, depending on the actual situation: if the rotating body translates, the B frame moves with it; if the rotating body does not translate, the B frame does not move.
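The conversion in step 4 of a rotation matrix into a rotation axis and a rotation angle, and the per-frame speed derived from it, can be sketched as follows. This is a minimal sketch only: the trace formula for the angle is standard, but the time base t and the test matrix are assumptions of ours, not values from the patent:

```python
import math

def rotation_to_axis_angle(R):
    """Convert a 3x3 rotation matrix (list of rows) to (axis, angle).

    angle = arccos((trace(R) - 1) / 2); the axis is recovered from the
    skew-symmetric part of R. Assumes 0 < angle < pi.
    """
    trace = R[0][0] + R[1][1] + R[2][2]
    angle = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    s = 2.0 * math.sin(angle)
    axis = ((R[2][1] - R[1][2]) / s,
            (R[0][2] - R[2][0]) / s,
            (R[1][0] - R[0][1]) / s)
    return axis, angle

# Example: a rotation of 0.3 rad about the z-axis accumulated over t = 0.2 s.
t = 0.2
c, s = math.cos(0.3), math.sin(0.3)
Rz = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
axis, theta = rotation_to_axis_angle(Rz)
omega = theta / t  # per-frame rotational speed estimate in rad/s
```

Dividing the recovered angle by the elapsed time gives the per-frame speed ωi that step 6 later fits with RANSAC.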
A dictionary is built from the previously obtained images of the uniformly rotating body, in preparation for the subsequent loop-closure detection.
The process of building the dictionary is as follows: first, ORB feature points are extracted from the previously obtained images of the non-cooperative target and clustered with the K-means algorithm; the clustering result is represented as a k-ary tree, as in Fig. 6. At the root node of the k-ary tree, all ORB feature points are clustered into k classes with the K-means algorithm; at each node of the first layer of the tree, the ORB feature points belonging to that node are again clustered into k classes, giving the nodes of the next layer; and so on, until the nodes of the leaf layer are obtained. Each node of the leaf layer is a "word" of the dictionary;
The intermediate nodes of the k-ary tree in this embodiment facilitate fast lookup.
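The recursive clustering that builds the k-ary tree dictionary can be sketched as follows. This is a toy sketch under stated assumptions: 2-D integer points stand in for the 256-bit ORB descriptors, and plain Euclidean k-means is used, whereas practical ORB vocabularies (e.g. DBoW2) cluster in Hamming space:

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Plain k-means on numeric tuples; returns k lists of points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each center as the mean of its cluster (keep old if empty)
        centers = [tuple(sum(a) / len(cl) for a in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return clusters

def build_vocab_tree(points, k, depth):
    """Recursively cluster descriptors into a k-ary tree; leaves are 'words'."""
    if depth == 0 or len(points) <= k:
        return points  # leaf node: a word of the dictionary
    return [build_vocab_tree(cl, k, depth - 1) for cl in kmeans(points, k)]

# Hypothetical 2-D 'descriptors'.
rng = random.Random(0)
descs = [(rng.randrange(100), rng.randrange(100)) for _ in range(40)]
tree = build_vocab_tree(descs, k=3, depth=2)  # k children per node, depth 2
```

Looking up a descriptor then descends from the root, comparing against only k candidates per level, which is what makes the intermediate nodes a fast index.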
Specific embodiment 2: this embodiment further restricts the rotational speed measurement method for a uniformly rotating body described in embodiment 1. In this embodiment, the detailed process of step 1 is as follows:
The YOLO object detection model is a convolutional neural network comprising 24 convolutional layers, 4 pooling layers, and 2 fully connected layers, namely:
1 convolutional layer with 64 kernels of size 7×7;
1 max-pooling layer of size 2×2;
1 convolutional layer with 192 kernels of size 3×3;
1 max-pooling layer of size 2×2;
1 convolutional layer with 128 kernels of size 1×1;
1 convolutional layer with 256 kernels of size 3×3;
1 convolutional layer with 256 kernels of size 1×1;
1 convolutional layer with 512 kernels of size 3×3;
1 max-pooling layer of size 2×2;
4 convolutional layers with 256 kernels of size 1×1;
4 convolutional layers with 512 kernels of size 3×3;
1 convolutional layer with 512 kernels of size 1×1;
1 convolutional layer with 1024 kernels of size 3×3;
1 max-pooling layer of size 2×2;
2 convolutional layers with 512 kernels of size 1×1;
2 convolutional layers with 1024 kernels of size 3×3;
2 convolutional layers with 1024 kernels of size 3×3;
2 convolutional layers with 1024 kernels of size 3×3;
2 fully connected layers;
The YOLO object detection model is pre-trained with the data of the ImageNet database, giving the initial weights of each layer of the model; the previously obtained images of the uniformly rotating body are then input into the YOLO model, and its weights are adjusted by continued iterative training until the loss function value of the model falls below the given threshold of 0.2; the weights of each layer of the model at that point are taken, giving the trained YOLO object detection model.
Specific embodiment 3: this embodiment further restricts the rotational speed measurement method for a uniformly rotating body described in embodiment 2. In this embodiment, the previously obtained images of the uniformly rotating body have a size of 448 × 448; the number of images is greater than 800, and the images have been labeled.
Specific embodiment 4: this embodiment further restricts the rotational speed measurement method for a uniformly rotating body described in embodiment 3. In this embodiment, the detailed process by which step 3.2 computes the actual translational velocity of the uniformly rotating body in the camera coordinate system is as follows:
Let the coordinates of an ORB feature point of the (i-1)-th frame in set H be (X, Y, Z) in the C frame, let the angular velocity about the three axes of the C frame be (Ω1, Ω2, Ω3), and let the translational velocity along the three axes of the C frame be (TX, TY, TZ). From the Euler equations, the velocities of the point (X, Y, Z) along the X, Y, and Z axes of the C frame are:

Ẋ = Ω2·Z − Ω3·Y + TX,  Ẏ = Ω3·X − Ω1·Z + TY,  Ż = Ω1·Y − Ω2·X + TZ  (1)

where Ẋ, Ẏ, and Ż are the velocities of the point (X, Y, Z) along the X, Y, and Z axes of the C frame, and [Ẋ Ẏ Ż]ᵀ is the matrix they compose;
By the camera model:

u = fx·X/Z + cx  (2)
v = fy·Y/Z + cy  (3)

where fx, fy, cx, and cy are the intrinsic parameters of the camera, and (u, v) are the coordinates in the pixel coordinate system of the point (X, Y, Z) in the C frame;
Differentiating (2) and (3) with respect to time gives:

u̇ = fx·(Ẋ − x′·Ż)/Z  (4)
v̇ = fy·(Ẏ − y′·Ż)/Z  (5)

where the intermediate variables are x′ = X/Z and y′ = Y/Z. Substituting (1) into (4) and (5) and rearranging gives:

u̇ = fx·[−Ω1·x′y′ + Ω2·(1 + x′²) − Ω3·y′ + (TX − x′·TZ)/Z]  (6)
v̇ = fy·[−Ω1·(1 + y′²) + Ω2·x′y′ + Ω3·x′ + (TY − y′·TZ)/Z]  (7)

where u̇ is the horizontal velocity of (u, v) in the pixel coordinate system and v̇ is its vertical velocity; they are obtained as follows:
u̇ is the ratio of the horizontal-coordinate difference between an ORB feature point on the (i-1)-th frame and its matching ORB feature point on the i-th frame to the time difference between the two frames; v̇ is the ratio of the vertical-coordinate difference between the ORB feature point on the (i-1)-th frame and its match on the i-th frame to the time difference between the two frames. The depth value Z0 of the ORB feature point on the i-th frame is read directly from the depth image of the i-th frame;
Equations (6) and (7) are organized into the matrix form

A·M = N  (8)

where each matched feature point contributes the two rows of (6) and (7) to A and the corresponding u̇ and v̇ to N, and M = [Ω1 Ω2 Ω3 TX TY TZ]ᵀ. Solving for M requires at least 3 ORB feature points;
When the number of ORB feature points is 3, A is a full-rank square matrix, and (8) becomes:

M = A⁻¹·N  (9)

where the coordinates in the C frame of the 3 ORB feature points of the (i-1)-th frame in set H are (X1, Y1, Z1), (X2, Y2, Z2), and (X3, Y3, Z3); x′1, y′1 are the intermediate variables corresponding to the point (X1, Y1, Z1); x′2, y′2 correspond to (X2, Y2, Z2); and x′3, y′3 correspond to (X3, Y3, Z3);
When there are more than 3 ORB feature points, (8) becomes formula (10), where du1/dt and dv1/dt are the horizontal and vertical velocities of the pixel (u1, v1) in the pixel coordinate system, du2/dt and dv2/dt those of the pixel (u2, v2), and du3/dt and dv3/dt those of the pixel (u3, v3); (u1, v1), (u2, v2) and (u3, v3) are the coordinates in the pixel coordinate system of the points (X1, Y1, Z1), (X2, Y2, Z2) and (X3, Y3, Z3) of the C coordinate system.

M is solved using the pseudo-inverse of matrix A, which yields the instantaneous translational velocity of the uniformly rotating body at the i-th frame in the C coordinate system; before the next frame image is captured, the camera moves uniformly with the velocity (TX, TY, TZ):

M = (A^T·A)^(−1)·A^T·N   (11)

where A^T is the transpose of matrix A and (A^T·A)^(−1) is the inverse of the product of A^T and A.
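As a sketch of the least-squares solve of equation (11) (not the patent's implementation; the feature coordinates, intrinsics and motion below are made-up values, and the rows of A follow the rigid-motion optical-flow coefficients of equations (6) and (7)):

```python
import numpy as np

def flow_rows(xp, yp, Z, fx, fy):
    """Two rows of A for one feature point: coefficients of
    [Omega1, Omega2, Omega3, Tx, Ty, Tz] in equations (6) and (7)."""
    ru = [-fx * xp * yp, fx * (1 + xp * xp), -fx * yp, fx / Z, 0.0, -fx * xp / Z]
    rv = [-fy * (1 + yp * yp), fy * xp * yp, fy * xp, 0.0, fy / Z, -fy * yp / Z]
    return ru, rv

fx = fy = 500.0
# hypothetical normalized coordinates (x', y') and depths Z of 4 matched features
feats = [(0.1, 0.2, 2.0), (-0.3, 0.1, 1.5), (0.2, -0.4, 3.0), (0.0, 0.1, 2.5)]
A = np.array([row for xp, yp, Z in feats for row in flow_rows(xp, yp, Z, fx, fy)])

M_true = np.array([0.01, 0.02, -0.01, 0.1, -0.05, 0.2])  # [O1 O2 O3 Tx Ty Tz]
N = A @ M_true                    # synthetic pixel velocities (du/dt, dv/dt)

M = np.linalg.solve(A.T @ A, A.T @ N)  # M = (A^T A)^(-1) A^T N, eq. (11)
```

With exact (noise-free) pixel velocities the pseudo-inverse recovers the motion parameters exactly; with real matches it returns the least-squares estimate over all features.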
du1/dt and dv1/dt are obtained as follows: du1/dt is the ratio of the horizontal-coordinate difference between the ORB feature point (X1, Y1, Z1) of the i-th frame image in set H and its matching ORB feature point on the (i+1)-th frame image to the time difference between the two adjacent frames; dv1/dt is the ratio of the corresponding vertical-coordinate difference to the same time difference. du2/dt and dv2/dt are obtained in the same way for the feature point (X2, Y2, Z2), and du3/dt and dv3/dt for the feature point (X3, Y3, Z3).
To avoid the effect of occasional ORB feature-point measurement errors, more than 3 ORB feature points are used; the algorithm is more accurate with more than 3 ORB feature points.

After the uniform translational velocity of the rotating body has been calculated in this embodiment, the camera can be moved at the same translational velocity as the object, guaranteeing that the camera can acquire the color image and depth image of the uniformly rotating body at all times; each frame image comprises the color image and depth image acquired at that time.
Specific embodiment 5: this embodiment further limits the rotating-speed measurement method for a uniformly rotating body described in embodiment 4. In this embodiment, the detailed process of step 3.3, estimating the rotation matrix Ri and translation matrix ti of the C coordinate system of the i-th frame image in set H relative to the C0 coordinate system, is:

The first frame image in set H is set as the first key frame, and the C0 coordinate system is taken as the world coordinate system. According to the depth-image information of the first key frame, the ORB feature points of the first key-frame image are converted into three-dimensional map points in the world coordinate system and inserted into the local map.

Selection principle for the next key frame: more than 20 frames have passed since the previous key frame was inserted, or at least 50 ORB feature points of the current frame match map points in the local map.

After the ORB feature points of the first key-frame image have been converted into three-dimensional map points in the world coordinate system and inserted into the local map, the map points in the local map are matched against the ORB feature points of the 2nd frame image in set H and the local map is updated. By analogy, the ORB feature points of the i-th frame image in set H are matched against the map points of the local map updated after the (i−1)-th frame image, and the local map is updated again.

The update principle is: given a threshold, when the number of matched points is greater than the threshold, the map points outside the field of view of the current-frame camera are deleted from the local map; when the number of matched points is less than or equal to the threshold, all ORB feature points of the current frame are converted into map points and inserted into the local map.

For each key frame, all ORB feature points of the key frame are converted into map points and stored in the local map.

For the i-th frame image in set H, the rotation matrix Ri and translation matrix ti of the i-th frame image are computed from the matched map points and the ORB feature points of the i-th frame image using the 3D-2D method.
The 3D-2D (PnP) method of this embodiment is described below:

The process by which a camera maps points in the three-dimensional world onto the two-dimensional image plane can be described with a geometric model. There are many such models; the simplest is the pinhole model, which describes how rays of light pass through a pinhole and are projected onto the plane behind it. As shown in Fig. 2, it illustrates the imaging of a spatial point P by the camera.

Let (u, v) denote the pixel coordinates of P and (X, Y, Z) its coordinates in the camera coordinate system. Then:

Z·[u v 1]^T = K·[X Y Z]^T

where fx, fy, cx, cy are the intrinsic parameters of the camera; below, the matrix formed by the camera's intrinsic parameters is denoted K.

Let PC = K^(−1)·[u v 1]^T denote the normalized coordinates of the pixel.

The extrinsic parameters of the camera are the transformation matrices R, t between the camera coordinate system and the world coordinate system, where R is the rotation matrix (3*3) and t is the translation matrix (3*1). If the coordinates of the point P in the world coordinate system are (XW, YW, ZW), then:

Z·[u v 1]^T = K·(R·[XW YW ZW]^T + t)

Expressed in normalized coordinates:

Z·PC = R·[XW YW ZW]^T + t

The 3D-2D problem is described as follows: given two groups of mutually matched feature points, one group being true 3D coordinates and the other two-dimensional pixel coordinates, estimate the motion between them. Briefly, 3D-2D estimation goes from the 3D space points X_{k−1} to the corresponding two-dimensional pixels p_k, yielding the rotation matrix Rk and translation matrix tk of the camera between the two frames. In this algorithm, the 3D points in the local map are matched mainly against the ORB feature points on the two-dimensional image. The problem is generally described as minimizing the reprojection error between each feature point and the pixel coordinate obtained by re-projecting the corresponding three-dimensional space point [X Y Z]^T into the next frame image according to the camera motion matrix.

If [x y 1]^T are the normalized coordinates of a point in the pixel coordinate system and [X Y Z]^T is a three-dimensional point in the world coordinate system, then the equation equals 0 when the above expression is minimal; define the augmented matrix Tk′ = [Rk | tk], a 3*4 matrix. Setting the expression to 0, it can be written in the form of normalized coordinates:

s·[x y 1]^T = Tk′·[X Y Z 1]^T

Expanding, let T1, T2, T3 be the first, second and third rows of Tk′; the transformation matrix Tk′ = [Rk tk] is then obtained by the direct linear transformation algorithm. When there are exactly 6 point correspondences, a system of homogeneous linear equations AT = 0 is obtained; performing SVD on A solves for Tk′, from which the rotation matrix Rk and translation matrix tk are obtained.

Carrying out this 3D-2D computation between the three-dimensional points in the local map and the ORB feature points on each frame image yields the rotation matrix and translation matrix of each frame's camera coordinate system.
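The direct linear transformation step can be sketched as follows (a minimal illustration with synthetic ground truth, not the patent's code; the point coordinates, rotation angle and translation are assumed values):

```python
import numpy as np

def dlt_pnp(pts3d, pts2d):
    """Direct linear transform for 3D-2D pose: stack the homogeneous
    constraints T1.P - x*T3.P = 0 and T2.P - y*T3.P = 0 into A, solve
    A vec(T) = 0 by SVD, then fix the scale and sign of T = [R | t]."""
    rows = []
    for (X, Y, Z), (x, y) in zip(pts3d, pts2d):
        P = [X, Y, Z, 1.0]
        rows.append(P + [0.0] * 4 + [-x * p for p in P])
        rows.append([0.0] * 4 + P + [-y * p for p in P])
    _, _, Vt = np.linalg.svd(np.array(rows))
    T = Vt[-1].reshape(3, 4)
    T /= np.linalg.norm(T[2, :3])                 # rows of R have unit norm
    if T[2] @ np.array([*pts3d[0], 1.0]) < 0:     # depths must be positive
        T = -T
    return T

# synthetic ground truth: rotation by 0.3 rad about z, small translation
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
t = np.array([0.1, -0.2, 0.5])
pts3d = [(0, 0, 4.0), (1, 0, 5.0), (0, 1, 6.0), (1, 1, 4.0),
         (-1, 0.5, 5.0), (0.5, -1, 6.0), (-0.5, -0.5, 4.5), (0.3, 0.8, 5.5)]
# project into normalized coordinates x = Xc/Zc, y = Yc/Zc
pts2d = [((p := R @ np.array(P) + t)[0] / p[2], p[1] / p[2]) for P in pts3d]

T_est = dlt_pnp(pts3d, pts2d)
```

With noise-free correspondences in general (non-coplanar) position, the recovered 3*4 matrix equals [R | t] up to numerical precision; in practice an iterative refinement usually follows the DLT solution.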
Specific embodiment 6: this embodiment further limits the rotating-speed measurement method for a uniformly rotating body described in embodiment 5. The detailed process of step 3.4 in this embodiment is:

The transformation matrix Tk of the k-th frame image in set H is composed of Rk and tk, where Rk is the rotation matrix corresponding to the k-th frame image in set H and tk is its translation matrix; 0 is a 3*1 zero matrix and 0^T is the transpose of the zero matrix.

The transformation matrices T1, T2, …, Ti of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system, and each map point Oj in the local map, are taken as the variable nodes to be optimized in the factor graph, denoted P1, P2, …, Pk, …, Pi and Q1, Q2, …, Qj, …, Qm respectively; j = 1, 2, …, m, where m is the total number of map points in the local map at the i-th frame.

The reprojection error ||(u′0, v′0) − (u0, v0)||2 is taken as the factor node in the factor graph between variable node Pk and variable node Qj, where (u′0, v′0) are the coordinates of the ORB feature point on the k-th frame image that matches the map point Oj in the local map, (u0, v0) are the coordinates in the pixel coordinate system obtained by projecting the map point Oj through the transformation matrix Tk, and ||·||2 is the 2-norm.

The map point Oj in the local map is projected into the pixel coordinate system through the transformation matrix Tk, where (X′j, Y′j, Z′j) are the three-dimensional coordinates of the map point Oj in the world coordinate system.

From the transformation matrices T1, T2, …, Ti of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system, the factor nodes, the variable nodes and each map point Oj in the local map, a factor graph is established, and back-end optimization is performed on the factor graph with g2o.

This yields the optimized rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images in set H relative to the C0 coordinate system, together with the map points in the local map; the result is denoted by the intermediate variable X*, where T1:i is the matrix formed by T1, T2, …, Ti and O1:m is the matrix formed by O1, O2, …, Om.
The g2o graph-optimization process is described below:

If only PnP is used to estimate the pose, the estimate accumulates error and the trajectory drifts. Therefore, over time, global optimization of all estimated camera poses (Rk, tk) and of the map points in the local map is needed.

Assume that when the camera is at pose xk, the coordinate of landmark yj in the pixel coordinate system is zkj, with measurement error vkj; assume the measurement error obeys a Gaussian distribution. A single observation is:

zkj = h(yj, xk) + vkj

Assuming the noise term vkj ~ N(0, Qkj), the conditional probability of the observed data is P(zkj | xk, yj) = N(h(yj, xk), Qkj).

This is still a Gaussian distribution. To compute the xk and yj that maximize it, we seek the maximum likelihood of the Gaussian distribution by minimizing the negative logarithm.

Consider an arbitrary high-dimensional Gaussian distribution x ~ N(0, Qkj); taking the negative logarithm of its probability density function gives:

−ln N(x) = (1/2)·ln((2π)^N·det(Qkj)) + (1/2)·x^T·Qkj^(−1)·x

Maximizing the original distribution is equivalent to minimizing the negative logarithm. When minimizing over x, the first term is independent of x and can be dropped; only the quadratic form on the right needs to be optimized. Substituting the observation equation, this is equivalent to solving:

x* = argmin((zkj − h(yj, xk))^T·Qkj^(−1)·(zkj − h(yj, xk)))

Let ejk = zkj − h(yj, xk). Considering the landmark points and poses in all frames, the error function becomes:

F(x) = Σ_{j,k} ejk^T·Qkj^(−1)·ejk

Finding suitable xk and yj that minimize this expression converts the problem into a nonlinear least-squares problem.

A factor graph is an undirected graph composed of two kinds of nodes: variable nodes, which represent the variables to be optimized, and factor nodes, which represent the factors. Variable nodes are drawn as circles and factors as rectangles. A factor connects several nodes and represents the joint probability distribution between them. In pose estimation, the quantities to be optimized are x and y, and the factor is the error ejk. The essence of factor-graph optimization is to convert the optimization problem into a weighted nonlinear least-squares problem and to minimize the error-function value of the system by adjusting the value of each node.

Let Xk be the matrix formed by all poses from time 0 to time k; Yj the matrix formed by all map points from map point 0 to map point j; and X the matrix formed by Xk and Yj.

Suppose X̌k and Y̌j are an initial estimate, with small increments ΔXk and ΔYj. Then ejk can be written in the form:

ejk(X̌k + ΔXk, Y̌j + ΔYj) ≈ ejk(X̌k, Y̌j) + Jjk·ΔX

where Jjk is the Jacobian matrix of ejk at the initial estimate. With the increments, each small component of the error function F(x) can be expressed as:

Fjk ≈ (ejk + Jjk·ΔX)^T·Qkj^(−1)·(ejk + Jjk·ΔX)

Expanding, denote ejk^T·Qkj^(−1)·ejk by Cjk, Jjk^T·Qkj^(−1)·ejk by bjk, and Jjk^T·Qkj^(−1)·Jjk by Hjk; the expression can then be written in the simplified form:

Fjk ≈ Cjk + 2·bjk^T·ΔX + ΔX^T·Hjk·ΔX

Summing over all factors,

F(X̌ + ΔX) ≈ C + 2·b^T·ΔX + ΔX^T·H·ΔX

Our purpose is therefore to find the ΔX that minimizes the error; setting the derivative with respect to ΔX to zero gives

H·ΔX* = −b

and the required solution is X* = X + ΔX*.

When the number of poses and map points to be solved is very large, inverting the H matrix becomes very expensive; the sparsity and marginal structure of the H matrix are exploited in the solution.

In this way, the optimized pose matrices and the map points of all local maps are obtained.
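The Gauss–Newton update H·ΔX* = −b described above can be sketched on a toy one-dimensional curve-fitting problem (hypothetical data; the patent's actual graph couples poses and map points and is solved with g2o's sparse solvers):

```python
import numpy as np

# Fit y = exp(a*x + b) by repeatedly solving H dx = -g with
# H = J^T J and g = J^T e (unit information matrix, Q^(-1) = I).
xs = np.linspace(0.0, 1.0, 20)
ys = np.exp(1.0 * xs + 2.0)          # exact data: a_true = 1, b_true = 2

a, b = 0.9, 1.9                      # initial estimate near the optimum
for _ in range(20):
    m = np.exp(a * xs + b)           # current model values
    e = ys - m                       # residuals e_jk
    J = np.stack([-xs * m, -m], axis=1)  # Jacobian of e w.r.t. (a, b)
    H = J.T @ J
    g = J.T @ e
    dx = np.linalg.solve(H, -g)      # H dX* = -b in the notation above
    a, b = a + dx[0], b + dx[1]
```

Each iteration linearizes the residual at the current estimate and solves the resulting normal equations; with the estimate near the optimum the iteration converges rapidly to (a, b) = (1, 2).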
Specific embodiment 7: this embodiment further limits the rotating-speed measurement method for a uniformly rotating body described in embodiment 6. The detailed process of step 3.5 in this embodiment is:

Selection principle of the frame to be detected: from the optimized rotation matrix Ri of the C coordinate system of the i-th frame image relative to the C0 coordinate system output by step 3.4, and the time taken to acquire the 1st through i-th frame images, estimate the rotating speed ωi; from ωi, estimate the time t at which a loop closure will occur. If the time taken from the 1st frame image to the i-th frame image lies in the range 0.7t to 1.3t, the frame image is a frame to be detected.

If the i-th frame, whose detected color image has a confidence value greater than or equal to 0.85, is a frame to be detected, loop-closure detection is performed based on the bag-of-words model. The process is:

After step 2 has established the dictionary of the k-branch tree type, the forward index and inverse index are used to find, for each feature fi of the image of the frame to be detected, the corresponding word wl in the dictionary; each word is scored with TF-IDF according to the frequency with which it occurs, yielding the distribution, expressed as a vector, of the word list of the frame to be detected over the dictionary.

After scoring, the similarity between the frame to be detected and the first frame is calculated.
Suppose that after TF-IDF scoring the first frame image is represented by the vector vA and the frame to be detected by the vector vB; the similarity of the two frames corresponding to the vectors vA and vB is then:

s(vA, vB) = (1/2)·Σ_{i′=1}^{N′} (|vAi′| + |vBi′| − |vAi′ − vBi′|)

where vAi′ is the i′-th element of vA, vBi′ is the i′-th element of vB, i′ = 1, 2, …, N′; |vAi′| and |vBi′| are the absolute values of vAi′ and vBi′, |vAi′ − vBi′| is the absolute value of their difference, and N′ is the total number of features of the frame to be detected;
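A minimal sketch of such an L1-style comparison of TF-IDF vectors (an assumption about the normalization, in the spirit of bag-of-words loop detectors; the exact form used by the patent is not fully recoverable from the text):

```python
import numpy as np

def bow_similarity(vA, vB):
    """Similarity between two TF-IDF word vectors: L1-normalize each
    vector, then s = 1 - 1/2 * sum |vA_i - vB_i|; s = 1 for identical
    word distributions and s = 0 for completely disjoint ones."""
    vA = vA / np.abs(vA).sum()
    vB = vB / np.abs(vB).sum()
    return 1.0 - 0.5 * np.abs(vA - vB).sum()

same = bow_similarity(np.array([0.5, 0.3, 0.2]), np.array([0.5, 0.3, 0.2]))
disjoint = bow_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

For L1-normalized vectors this score is algebraically equivalent to (1/2)·Σ(|vAi| + |vBi| − |vAi − vBi|), matching the element-wise terms listed above, and lies in [0, 1] so a fixed threshold can be applied.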
Given a threshold, if the similarity is greater than the threshold a loop closure is detected, and the rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system, together with the map points in the local map, are optimized a further time; if the similarity is less than or equal to the threshold, step 4 is executed.

If the i-th frame is not a frame to be detected, step 4 is executed.

0 ≤ threshold ≤ 1.

The process of re-optimizing the rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system and the map points in the local map is:

||TA − TK||2 is added as a new factor node between variable node P1 and variable node PK, and the rotation matrices Rk, translation matrices tk of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system and the map points in the local map are optimized further; TA is the transformation matrix of the first frame image, and TK is the transformation matrix of the detected frame K found to have the highest similarity to the first frame image.
The formula used in this embodiment to estimate the time t at which a loop closure occurs is t = 2π/ωi, the time for one full revolution at the estimated rotating speed.
Specific embodiment 8: this embodiment further limits the rotating-speed measurement method for a uniformly rotating body described in embodiment 7. The detailed process of step 4 in this embodiment is:

By Euler's rotation theorem, an arbitrary rotation of a rigid body about a fixed point is equivalent to a rotation through some angle θ about some axis n passing through that point.

Rcb = Rcc0·Rc0b   (15)

where Rcb is the rotation matrix of the C coordinate system relative to the B coordinate system; Rcc0 is the rotation matrix of the C coordinate system relative to the C0 coordinate system; and Rc0b is the rotation matrix of the C0 coordinate system relative to the B coordinate system.

The C0 coordinate system does not rotate relative to the B coordinate system, so Rc0b = I and therefore Rcb = Rcc0; the rotation matrix of the C coordinate system of the i-th frame image relative to the B coordinate system is Rcbi = Ri.

The rotation matrix Rcbi of each frame image obtained by the optimization is converted into the form of one rotation axis and one rotation angle; this conversion is expressed by Rodrigues' formula:

Rcbi = cos θi·I + (1 − cos θi)·ni·ni^T + sin θi·ni∧   (16)

where Rcbi is the rotation matrix of the C coordinate system relative to the B coordinate system for the i-th frame image in set H of the uniformly rotating body; θi is the rotation angle of the C coordinate system of the i-th frame image in set H relative to the B coordinate system; ni is the rotation axis about which the C coordinate system of the i-th frame image in set H rotates relative to the B coordinate system; I is the three-dimensional identity matrix; ni^T is the transpose of ni; and ni∧ is the skew-symmetric (cross-product) matrix of ni.

Then:

tr(Rcbi) = cos θi·tr(I) + (1 − cos θi)·tr(ni·ni^T) + sin θi·tr(ni∧)
         = 3·cos θi + (1 − cos θi)
         = 1 + 2·cos θi

so θi = arccos((tr(Rcbi) − 1)/2)

where tr(Rcbi) is the trace of the matrix Rcbi, tr(I) is the trace of the matrix I, tr(ni·ni^T) is the trace of the matrix ni·ni^T, and tr(ni∧) is the trace of the matrix ni∧.

For the images before the first loop closure is detected, let τi denote the time taken from the 1st frame image to the i-th frame image; the rotating speed measured at the i-th frame is then ωi = θi/τi. For the images after the first loop closure is detected and before the second loop closure is detected, the rotating speed measured at the i-th frame is ωi = (θi + 2π)/τi.
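The trace relation above can be checked numerically; a small sketch with an assumed rotation of 0.6 rad about the z-axis observed over τ = 3 s (illustrative values, not the patent's data):

```python
import numpy as np

def rotation_angle(R):
    # theta = arccos((tr(R) - 1) / 2), from tr(R) = 1 + 2*cos(theta)
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

theta_true = 0.6
c, s = np.cos(theta_true), np.sin(theta_true)
R = np.array([[c, -s, 0.0],          # rotation by 0.6 rad about the z axis
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

tau = 3.0                            # time from frame 1 to frame i
omega = rotation_angle(R) / tau      # rad/s, before the first loop closure
```

The np.clip guards against tiny numerical excursions of the trace outside [−1, 3] that would otherwise make arccos return NaN.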
Embodiment
Using the method of the invention, the rotating speed of a certain uniformly rotating body was measured. The actual rotating speed of the body is 0.2166 rad/s; the RGBD camera acquired 680 frames of data in total, and the rotating speed estimated from each frame is shown in Fig. 3.

After loop-closure detection is added, the speed value becomes accurate at once. As can be seen from Fig. 3, a loop closure was detected at the 348th frame. The data before loop detection (the first 348 frames) and after loop detection (the 348th to 680th frames) were regression-processed with RANSAC respectively, as shown in Fig. 4 and Fig. 5, where the x-axis indicates the corresponding frame and the y-axis indicates the rotating speed obtained for each frame.

Fig. 4 shows the raw speed estimates before loop detection and the RANSAC fit; the zero-slope line fitted by RANSAC to the data before loop detection is:

y = 0.1779

Fig. 5 shows the raw speed estimates after loop detection and the RANSAC fit; the zero-slope line fitted by RANSAC to the data after loop detection is:

y = 0.2085

The speed values and relative errors before and after loop detection are summarized in:

Table 1: speed estimates before and after loop detection

The rotating speed after loop detection is taken as the final measurement result; its relative error is 3.74%, which is comparatively accurate.
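The RANSAC fit of a zero-slope line to the per-frame speed estimates can be sketched as follows (the speeds, inlier band and outlier pattern below are synthetic, not the patent's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical per-frame estimates near 0.2 rad/s, with gross outliers injected
speeds = 0.2 + rng.normal(0.0, 0.005, 300)
speeds[::30] += 0.5

best_count, intercept = 0, None
for _ in range(100):
    cand = speeds[rng.integers(len(speeds))]   # hypothesis: the line y = cand
    inliers = np.abs(speeds - cand) < 0.02     # consensus set within the band
    if inliers.sum() > best_count:
        best_count = inliers.sum()
        intercept = speeds[inliers].mean()     # refit on the consensus set
```

Because each hypothesis is a single sampled speed, outliers never gather a large consensus set, and the final intercept is the mean over the inlier frames only.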
Claims (8)
1. A rotating-speed measurement method for a uniformly rotating body, characterized in that the specific steps of the method are:

Step 1: pre-train the YOLO object-detection model with the ImageNet database; input the images of the tested uniformly rotating body obtained in advance into the pre-trained YOLO object-detection model for training, obtaining the trained YOLO object-detection model;

Step 2: extract ORB feature points from the image information of the tested uniformly rotating body obtained in advance, cluster them with the K-means algorithm, and establish a k-branch tree dictionary;

Step 3: define three coordinate systems: the C coordinate system, the C0 coordinate system and the B coordinate system;

the C coordinate system is the camera coordinate system; the B coordinate system is the body coordinate system of the uniformly rotating body and passes through the centroid of the uniformly rotating body; the C0 coordinate system moves synchronously with the B coordinate system, and when the camera captures the first frame image for which the confidence value output by the YOLO object-detection model is greater than or equal to 0.85, the C0 coordinate system coincides with the C coordinate system;

the actual motion of each coordinate system is: taking the ground as the reference frame, the C coordinate system is static; the B coordinate system spins at a constant angular velocity and also translates; the C0 coordinate system rotates with the spin of the uniformly rotating body and translates with its translation; the C0 coordinate system differs from the B coordinate system only by a translation matrix;

Step 3.1: acquire the color images and depth images of the uniformly rotating body in real time with an RGBD camera, and input the acquired color images into the trained YOLO object-detection model for real-time object recognition and detection; if the confidence value output by the YOLO object-detection model for a certain frame image is greater than or equal to 0.85, record the x, y, w and h values of that frame image output by the YOLO object-detection model, and denote the region formed by the x, y, w and h values of the frame image as the ROI region of the frame image; the set formed by all frame images whose confidence value is greater than or equal to 0.85 is denoted set H;

here x and y are the center coordinates of the image pixel region where the uniformly rotating body is located, w is the width of the image pixel region where the uniformly rotating body is located, and h is the height of that region;

Step 3.2: extract the ORB feature points in the ROI region of the i-th frame image in set H, and match the ORB feature points of the (i−1)-th frame image with the ORB feature points in the ROI region of the i-th frame image in set H; from these, calculate the actual translational velocity of the uniformly rotating body in the camera coordinate system and make the camera translate at the same velocity, ensuring that the rotating body always stays within the camera's field of view;

Step 3.3: by the principle of relative motion, the B coordinate system and the C0 coordinate system are regarded as static and the C coordinate system as rotating and translating around the B coordinate system; calculate the rotation matrix Ri and translation matrix ti of the C coordinate system of the i-th frame image in set H relative to the C0 coordinate system, and pass the Rk and tk of the 1st to i-th frames into the graph optimization of step 3.4 for back-end optimization; k = 1, 2, …, i;

Step 3.4: establish a factor graph and perform back-end optimization on it with the g2o method, i.e. optimize the rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images in set H relative to the C0 coordinate system together with the map points in the local map, obtaining the optimized rotation matrices Rk, translation matrices tk and map points in the local map;

Step 3.5: if the i-th frame in set H is a frame to be detected, perform loop-closure detection based on the bag-of-words model and judge whether a loop closure is detected at the i-th frame; if so, optimize again the rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system and the map points in the local map; if not, execute step 4;

if the i-th frame in set H is not a frame to be detected, execute step 4;

Step 4: convert the rotation matrix of the C coordinate system of the i-th frame image in the optimized set H relative to the C0 coordinate system into the rotation matrix of the C coordinate system relative to the B coordinate system, convert the rotation matrix of the C coordinate system relative to the B coordinate system into the form of one rotation axis and one rotation angle, and calculate the rotating speed ωi corresponding to the i-th frame image;

Step 5: for every frame image whose confidence value output by the YOLO object-detection model is greater than or equal to 0.85, execute the process of step 3 to step 4 once, until a loop closure has been detected twice in step 3.5, at which point the RGBD camera stops acquiring images;

Step 6: fit a straight line to the rotating speeds ωi corresponding to all frame images using the random sample consensus method, and take the intercept of the line as the rotating speed of the uniformly rotating body.
2. The rotating-speed measurement method for a uniformly rotating body according to claim 1, characterized in that the detailed process of step 1 is: the YOLO object-detection model is a convolutional neural network comprising 24 convolutional layers, 4 pooling layers and 2 fully connected layers, namely:

1 convolutional layer with 64 convolution kernels of size 7*7;
1 max-pooling layer of 2*2;
1 convolutional layer with 192 convolution kernels of size 3*3;
1 max-pooling layer of 2*2;
1 convolutional layer with 128 convolution kernels of size 1*1;
1 convolutional layer with 256 convolution kernels of size 3*3;
1 convolutional layer with 256 convolution kernels of size 1*1;
1 convolutional layer with 512 convolution kernels of size 3*3;
1 max-pooling layer of 2*2;
4 convolutional layers with 256 convolution kernels of size 1*1;
4 convolutional layers with 512 convolution kernels of size 3*3;
1 convolutional layer with 512 convolution kernels of size 1*1;
1 convolutional layer with 1024 convolution kernels of size 3*3;
1 max-pooling layer of 2*2;
2 convolutional layers with 512 convolution kernels of size 1*1;
2 convolutional layers with 1024 convolution kernels of size 3*3;
2 convolutional layers with 1024 convolution kernels of size 3*3;
2 convolutional layers with 1024 convolution kernels of size 3*3;
2 fully connected layers;

the YOLO object-detection model is pre-trained with the data of the ImageNet database to obtain the initial weights of each layer of the model; then the images of the uniformly rotating body obtained in advance are input into the YOLO object-detection model, and the weights of the model are adjusted by iterative training until the loss-function value of the YOLO object-detection model is less than the given threshold 0.2, at which point the weights of each layer of the YOLO object-detection model are taken and the trained YOLO object-detection model is obtained.
3. The rotating-speed measurement method for a uniformly rotating body according to claim 2, characterized in that the images of the uniformly rotating body obtained in advance have a size of 448 × 448, the number of images is greater than 800, and the images have been labeled.
4. a kind of rotating speed measurement method of at the uniform velocity rotary body according to claim 3, which is characterized in that the step 3 two
Calculate the detailed process of at the uniform velocity rotary body actual translational velocity under camera coordinates system are as follows:
If coordinate of the ORB characteristic point of the (i-1)-th frame image under C coordinate system is (X, Y, Z) in set H, around C coordinate system three
The angular speed of a axis is (Ω1, Ω2, Ω3), the translational velocity of opposite three axis of C coordinate system is (TX,TY,TZ), by Eulerian equation
, (X, Y, Z) under C coordinate system puts the speed along X-axis, Y-axis and Z-direction are as follows:
With(X, Y, Z) respectively under C coordinate system puts the speed along X-axis, Y-axis and Z-direction;It isWithThe matrix of composition;
By camera model:
Wherein: fx,fy,cxAnd cyIt is the internal reference coefficient of camera, (u, v) is that (X, Y, Z) under C coordinate system is put under pixel coordinate system
Coordinate;
Above formula (2) and (3) derivation are obtained respectively:
Wherein: intermediate variableIntermediate variable
It willBring (4) and (5) formula into, arrangement can obtain:
Wherein:For the horizontal velocity of (u, v) under pixel coordinate system;The vertical speed for being (u, v) under pixel coordinate system;
Specifically ask method as follows:
Calculation method: the level of the ORB characteristic point to match in ORB characteristic point on the (i-1)-th frame image and the i-th frame image
The ratio of coordinate difference and the two field pictures time difference,Calculation method: ORB characteristic point on the (i-1)-th frame image and the i-th frame figure
The vertical coordinate difference of the ORB characteristic point to match on picture and the ratio of the two field pictures time difference;ORB on i-th frame image is special
The depth value of sign point is Z0, depth value directly read by the depth image of the i-th frame;
Formula (6) and (7) are organized into following matrix form:
Above-mentioned equation is denoted as AM=N,
Acquire matrix [Ω1 Ω2 Ω3 Tx Ty Tz]TAt least need 3 ORB characteristic points, [Ω1 Ω2 Ω3 Tx Ty Tz]T
It is the transposed matrix of M;
When the number of ORB feature points is 3, A is a full-rank 6×6 matrix, and formula (8) becomes:

$$M = A^{-1} N \tag{9}$$

where the coordinates in the C coordinate system of the 3 ORB feature points of the (i−1)-th frame image in set H are respectively (X1, Y1, Z1), (X2, Y2, Z2) and (X3, Y3, Z3); $x_1', y_1'$ are the intermediate variables corresponding to the point (X1, Y1, Z1), $x_2', y_2'$ are the intermediate variables corresponding to (X2, Y2, Z2), and $x_3', y_3'$ are the intermediate variables corresponding to (X3, Y3, Z3);
When the number of ORB feature points is greater than 3, formula (8) becomes the over-determined system (10), in which $\dot u_1$ is the horizontal velocity and $\dot v_1$ the vertical velocity of the pixel (u1, v1) in the pixel coordinate system; $\dot u_2$ and $\dot v_2$ are those of (u2, v2); $\dot u_3$ and $\dot v_3$ are those of (u3, v3); (u1, v1), (u2, v2) and (u3, v3) are respectively the coordinates in the pixel coordinate system of the points (X1, Y1, Z1), (X2, Y2, Z2) and (X3, Y3, Z3) of the C coordinate system;
M is solved using the pseudo-inverse of matrix A, that is, the instantaneous translational velocity of the uniformly rotating body at the i-th frame in the C coordinate system is found; before detecting the next frame image, the camera moves uniformly with the velocity (TX, TY, TZ):

$$M = (A^T A)^{-1} A^T N \tag{11}$$

where $A^T$ is the transpose of matrix A and $(A^T A)^{-1}$ is the inverse of the product of $A^T$ and A.
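The assembly of A and N from formulas (6) and (7) and the pseudo-inverse solve of formula (11) can be sketched as follows; the camera intrinsics and test motion are made-up values used only to check that the recovered M matches the true one:

```python
import numpy as np

def motion_rows(u, v, Z, fx, fy, cx, cy):
    """Two rows of A, the coefficients of formulas (6) and (7) for one point."""
    xp = (u - cx) / fx          # intermediate variable x'
    yp = (v - cy) / fy          # intermediate variable y'
    row_u = [-fx*xp*yp, fx*(1 + xp**2), -fx*yp, fx/Z, 0.0, -fx*xp/Z]
    row_v = [-fy*(1 + yp**2), fy*xp*yp, fy*xp, 0.0, fy/Z, -fy*yp/Z]
    return row_u, row_v

fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
M_true = np.array([0.01, -0.02, 0.03, 0.1, -0.05, 0.2])  # [Omega1..3, TX..TZ]
Omega, T = M_true[:3], M_true[3:]

A, N = [], []
for P in [(0.3, -0.2, 2.0), (-0.4, 0.1, 3.0), (0.2, 0.5, 2.5), (-0.1, -0.3, 4.0)]:
    X, Y, Z = P
    u, v = fx*X/Z + cx, fy*Y/Z + cy        # camera model, formula (3)
    V = T + np.cross(Omega, P)             # point velocity per formula (2) (sign convention assumed)
    u_dot = fx*(V[0]*Z - X*V[2]) / Z**2    # formula (4)
    v_dot = fy*(V[1]*Z - Y*V[2]) / Z**2    # formula (5)
    ru, rv = motion_rows(u, v, Z, fx, fy, cx, cy)
    A += [ru, rv]
    N += [u_dot, v_dot]

A, N = np.array(A), np.array(N)
M = np.linalg.inv(A.T @ A) @ A.T @ N       # pseudo-inverse solve, formula (11)
```

With more than 3 points the system is over-determined, and the pseudo-inverse gives the least-squares estimate of the six motion parameters.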
5. The rotating speed measurement method of a uniformly rotating body according to claim 4, wherein the detailed process of step 3.3, estimating the rotation matrix Ri and the translation matrix ti of the C coordinate system of the i-th frame image in set H relative to the C0 coordinate system, is:
The first frame image in set H is set as the first keyframe, and the C0 coordinate system is taken as the world coordinate system; according to the depth image information of the first keyframe image, the ORB feature points in the first keyframe image are converted into three-dimensional map points in the world coordinate system and put into the local map;
The selection principle for the next keyframe: more than 20 frames have passed since the previous keyframe was inserted, or the current frame successfully matches at least 50 map points in the local map;
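The keyframe selection rule above can be sketched as a simple predicate (the function and argument names are assumptions):

```python
def is_new_keyframe(frames_since_last_kf, n_matched_map_points):
    """Keyframe selection: more than 20 frames have passed since the last
    keyframe was inserted, or the current frame matches at least 50 map
    points in the local map."""
    return frames_since_last_kf > 20 or n_matched_map_points >= 50
```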
After the ORB feature points in the first keyframe image have been converted into three-dimensional map points in the world coordinate system and put into the local map, the map points in the local map are matched with the ORB feature points of the 2nd frame image in set H, and the local map is updated; and so on: the ORB feature points of the i-th frame image in set H are matched with the map points of the local map as updated at the (i−1)-th frame image, and the local map is updated;
The updating principle is: a threshold is given; when the number of matched points is greater than the threshold, the map points that are not in the field of view of the current frame's camera are deleted from the local map; when the number of matched points is less than or equal to the threshold, all ORB feature points of the current frame are converted into map points and put into the local map;
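The updating principle can be sketched with map points represented as plain ids (a hypothetical representation; the patent does not prescribe a data structure):

```python
def update_local_map(local_map, frame_points, matched, visible, threshold):
    """Local-map update rule: if the matched-point count exceeds the
    threshold, keep only the map points inside the current camera's field
    of view; otherwise promote all ORB features of the current frame to
    map points.  All arguments are sets of point ids."""
    if len(matched) > threshold:
        return {p for p in local_map if p in visible}
    return local_map | set(frame_points)

# enough matches: prune points outside the current view
pruned = update_local_map({1, 2, 3}, {4, 5}, matched={2, 3}, visible={2, 3, 9}, threshold=1)
# too few matches: add the current frame's features as new map points
grown = update_local_map({1, 2, 3}, {4, 5}, matched={2, 3}, visible={2, 3, 9}, threshold=5)
```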
For each keyframe, all ORB feature points in the keyframe are converted into map points and stored in the local map;
For the i-th frame image in set H, according to the matched map points and the ORB feature points of the i-th frame image, the rotation matrix Ri and the translation matrix ti of the i-th frame image are calculated with a 3D-2D (PnP) method.
6. The rotating speed measurement method of a uniformly rotating body according to claim 5, wherein the detailed process of step 3.4 is:
The transformation matrix Tk of the k-th frame image in set H is

$$T_k = \begin{bmatrix} R_k & t_k \\ 0^T & 1 \end{bmatrix}$$

where Rk is the rotation matrix corresponding to the k-th frame image in set H and tk is the translation matrix corresponding to the k-th frame image; 0 is a 3×1 zero matrix and $0^T$ is the transpose of the zero matrix;
The transformation matrices T1, T2, …, Ti of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system and every map point Oj in the local map are taken as the variable nodes to be optimized in the factor graph, denoted respectively P1, P2, …, Pk, …, Pi and Q1, Q2, …, Qj, …, Qm; j = 1, 2, …, m, where m is the total number of map points in the local map at the i-th frame;
The reprojection error $\lVert (u_0', v_0') - (u_0, v_0) \rVert_2^2$ is taken as the factor node between variable node Pk and variable node Qj;
where $(u_0', v_0')$ is the coordinate of the ORB feature point in the k-th frame image that matches the map point Oj in the local map; $(u_0, v_0)$ is the coordinate obtained by projecting the map point Oj onto the pixel coordinate system through the transformation matrix Tk; $\lVert\cdot\rVert_2$ is the 2-norm;
The map point Oj in the local map is projected onto the pixel coordinate system through the transformation matrix Tk by

$$Z_c \begin{bmatrix} u_0 \\ v_0 \\ 1 \end{bmatrix} = K \left( R_k \begin{bmatrix} X_j' \\ Y_j' \\ Z_j' \end{bmatrix} + t_k \right)$$

where $(X_j', Y_j', Z_j')$ is the three-dimensional coordinate of the map point Oj in the world coordinate system, K is the camera intrinsic matrix formed from $f_x$, $f_y$, $c_x$ and $c_y$, and $Z_c$ is the depth of the projected point in the camera frame;
According to the transformation matrices T1, T2, …, Ti of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system, the factor nodes, the variable nodes and every map point Oj in the local map, a factor graph is established, and back-end optimization is carried out on the factor graph with g2o;
The optimized rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images in set H relative to the C0 coordinate system and the map points in the local map are obtained, denoted by the intermediate variable X*,

$$X^* = \{ T_{1:i},\; O_{1:m} \}$$

where $T_{1:i}$ is the matrix composed of T1, T2, …, Ti and $O_{1:m}$ is the matrix composed of O1, O2, …, Om.
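The reprojection-error factor between a pose node Pk and a map-point node Qj can be sketched as follows, assuming a pinhole intrinsic matrix K built from fx, fy, cx and cy (g2o itself is not used in this sketch):

```python
import numpy as np

def reprojection_error(R, t, K, Pw, uv_obs):
    """Squared reprojection error ||(u0', v0') - (u0, v0)||^2 between the
    observed ORB feature uv_obs and the map point Pw (world coordinates)
    projected through the pose (R, t) and the intrinsic matrix K."""
    Pc = R @ Pw + t                 # map point in the camera frame
    uvw = K @ Pc
    uv = uvw[:2] / uvw[2]           # projected pixel coordinates (u0, v0)
    d = np.asarray(uv_obs) - uv
    return float(d @ d)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# identity pose, a point at depth 2 m, and an observation at its exact projection
e = reprojection_error(np.eye(3), np.zeros(3), K,
                       np.array([0.2, -0.1, 2.0]), (370.0, 215.0))
```

In the back end, one such term is accumulated for every pose/map-point pair and the sum is minimized over all variable nodes.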
7. The rotating speed measurement method of a uniformly rotating body according to claim 6, wherein the detailed process of step 3.5 is:
Selection principle of the frame to be detected: from the optimized rotation matrix Ri of the C coordinate system of the i-th frame image relative to the C0 coordinate system output by step 3.4 and the time taken to acquire the 1st to i-th frame images, the rotational speed ωi is estimated; according to ωi, the time t at which a winding (loop closure) occurs is estimated; if the time taken from the 1st frame image to the i-th frame image lies in the range 0.7t to 1.3t, the i-th frame image is a frame to be detected;
If the i-th frame, whose detected color-image confidence value is greater than or equal to 0.85, is a frame to be detected, winding detection is carried out based on the bag-of-words model, with the following process:
After step 2 has established the dictionary of the k-ary tree type, the forward index and the inverse index are used to find, for each feature fi of the frame to be detected, the corresponding word wl in the dictionary; according to the frequency with which each word occurs, TF-IDF scoring is applied, and the distribution of the words of the frame to be detected over the dictionary, represented as a vector, is obtained;
The similarity between the frame to be detected and the first frame is calculated after scoring;
Assume that after TF-IDF scoring the first frame image is represented by the vector vA and the frame to be detected is represented by the vector vB; the similarity between the two frames corresponding to the vectors vA and vB is:

$$s(v_A, v_B) = \frac{1}{2}\sum_{i'=1}^{N'}\left(|v_{Ai'}| + |v_{Bi'}| - |v_{Ai'} - v_{Bi'}|\right)$$

where $v_{Ai'}$ is the i'-th element of vA and $v_{Bi'}$ is the i'-th element of vB, i' = 1, 2, …, N'; $|v_{Ai'}|$ and $|v_{Bi'}|$ are the absolute values of $v_{Ai'}$ and $v_{Bi'}$ respectively, $|v_{Ai'} - v_{Bi'}|$ is the absolute value of their difference, and N' is the total number of features of the frame to be detected;
A threshold is given; if the similarity is greater than the threshold, a winding is detected, and the rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system and the map points in the local map are optimized again; if the similarity is less than or equal to the threshold, step 4 is executed;
If the i-th frame is not a frame to be detected, step 4 is executed;
0 ≤ threshold ≤ 1;
The process of optimizing again the rotation matrices Rk and translation matrices tk of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system and the map points in the local map is:
$\lVert T_A - T_K \rVert_2^2$ is added as a new factor node between variable node P1 and variable node PK, and the rotation matrices Rk, the translation matrices tk of the C coordinate systems of the 1st to i-th frame images relative to the C0 coordinate system and the map points in the local map are further optimized; TA is the transformation matrix of the first frame image, and TK is the transformation matrix of the frame to be detected K found to have the highest similarity with the first frame image.
8. The rotating speed measurement method of a uniformly rotating body according to claim 7, wherein the detailed process of step 4 is:

$$R_{cb} = R_{cc_0} R_{c_0 b} \tag{15}$$

where Rcb is the rotation matrix of the C coordinate system relative to the B coordinate system; Rcc0 is the rotation matrix of the C coordinate system relative to the C0 coordinate system; Rc0b is the rotation matrix of the C0 coordinate system relative to the B coordinate system;
The C0 coordinate system has no rotation relative to the B coordinate system, so $R_{c_0 b} = I$, that is Rcb = Rcc0, and the rotation matrix of the C coordinate system of the i-th frame image relative to the B coordinate system is Rcbi = Ri;
The rotation matrix Rcbi of every frame image obtained by the optimization is converted into the form of a rotation axis and a rotation angle; this conversion relationship is expressed by the Rodrigues formula:

$$R_{cbi} = \cos\theta_i\, I + (1 - \cos\theta_i)\, n_i n_i^T + \sin\theta_i\, n_i^{\wedge} \tag{16}$$

where Rcbi is the rotation matrix of the C coordinate system of the i-th frame image in set H of the uniformly rotating body relative to the B coordinate system; θi is the rotation angle of the C coordinate system of the i-th frame image in set H relative to the B coordinate system; ni is the rotation axis about which the C coordinate system of the i-th frame image in set H rotates relative to the B coordinate system; I is the three-dimensional identity matrix; $n_i^T$ is the transpose of ni; $n_i^{\wedge}$ is the cross-product (skew-symmetric) matrix formed from the elements of the vector ni;
Then:

$$\operatorname{tr}(R_{cbi}) = \cos\theta_i \operatorname{tr}(I) + (1-\cos\theta_i)\operatorname{tr}(n_i n_i^T) + \sin\theta_i \operatorname{tr}(n_i^{\wedge}) = 3\cos\theta_i + (1-\cos\theta_i) = 1 + 2\cos\theta_i$$

so

$$\theta_i = \arccos\frac{\operatorname{tr}(R_{cbi}) - 1}{2}$$

where $\operatorname{tr}(R_{cbi})$ is the trace of the matrix Rcbi; $\operatorname{tr}(I)$ is the trace of the matrix I; $\operatorname{tr}(n_i n_i^T)$ is the trace of the matrix $n_i n_i^T$; $\operatorname{tr}(n_i^{\wedge})$ is the trace of the matrix $n_i^{\wedge}$;
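The Rodrigues formula (16) and the recovery of the angle from the trace can be checked numerically; the function names are illustrative:

```python
import numpy as np

def rotation_from_axis_angle(n, theta):
    """Rodrigues formula (16): R = cos(t) I + (1 - cos(t)) n n^T + sin(t) n^."""
    n = np.asarray(n, dtype=float)
    n_hat = np.array([[0.0, -n[2], n[1]],      # cross-product matrix n^
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * n_hat)

def angle_from_rotation(R):
    """theta = arccos((tr(R) - 1) / 2), from tr(R) = 1 + 2 cos(theta)."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

R = rotation_from_axis_angle([0.0, 0.0, 1.0], 0.4)   # 0.4 rad about the Z axis
theta = angle_from_rotation(R)
```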
For the images before the winding is detected for the first time, let τi denote the time taken from the 1st frame image to the i-th frame image; the rotational speed measured at the i-th frame is then ωi = θi/τi. For the images after the winding is detected for the first time and before it is detected for the second time, the rotational speed measured at the i-th frame is ωi = (θi + 2π)/τi.
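The final speed computation, with 2π added once a first winding has been detected, can be sketched as follows (the loops_completed argument is an illustrative generalization of the two cases above):

```python
import math

def rotational_speed(theta_i, tau_i, loops_completed=0):
    """omega_i = (theta_i + 2*pi*loops_completed) / tau_i: before the first
    winding is detected loops_completed = 0; between the first and second
    windings, loops_completed = 1."""
    return (theta_i + 2.0 * math.pi * loops_completed) / tau_i
```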
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810615505.7A CN108982901B (en) | 2018-06-14 | 2018-06-14 | Method for measuring rotating speed of uniform-speed rotating body |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108982901A true CN108982901A (en) | 2018-12-11 |
CN108982901B CN108982901B (en) | 2020-06-09 |
Family
ID=64540470
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109910014A (en) * | 2019-04-08 | 2019-06-21 | 上海嘉奥信息科技发展有限公司 | Robotic Hand-Eye Calibration method neural network based |
CN110207666A (en) * | 2019-04-11 | 2019-09-06 | 南京航空航天大学 | The vision pose measuring method and device of analog satellite on a kind of air floating platform |
CN110310331A (en) * | 2019-06-18 | 2019-10-08 | 哈尔滨工程大学 | A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature |
CN112486197A (en) * | 2020-12-05 | 2021-03-12 | 哈尔滨工程大学 | Fusion positioning tracking control method based on self-adaptive power selection of multi-source image |
CN112686128A (en) * | 2020-12-28 | 2021-04-20 | 南京览众智能科技有限公司 | Classroom desk detection method based on machine learning |
CN112686962A (en) * | 2021-01-21 | 2021-04-20 | 中国科学院空天信息创新研究院 | Indoor visual positioning method and device and electronic equipment |
CN112924708A (en) * | 2021-01-29 | 2021-06-08 | 中国航天空气动力技术研究院 | Speed estimation method suitable for underwater near-bottom operation aircraft |
CN113222891A (en) * | 2021-04-01 | 2021-08-06 | 东方电气集团东方锅炉股份有限公司 | Line laser-based binocular vision three-dimensional measurement method for rotating object |
CN114549592A (en) * | 2022-04-24 | 2022-05-27 | 之江实验室 | Trajectory prediction and capture method and device for non-cooperative projectile |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004132831A (en) * | 2002-10-10 | 2004-04-30 | Sumitomo Rubber Ind Ltd | Method and apparatus for measuring rotating and flying characteristics of sphere |
CN101882219A (en) * | 2009-05-08 | 2010-11-10 | 财团法人工业技术研究院 | Image identification and output method and system thereof |
CN106980895A (en) * | 2017-02-22 | 2017-07-25 | 中国科学院自动化研究所 | Convolutional neural networks Forecasting Methodology based on rotary area |
CN107563392A (en) * | 2017-09-07 | 2018-01-09 | 西安电子科技大学 | The YOLO object detection methods accelerated using OpenCL |
CN108053449A (en) * | 2017-12-25 | 2018-05-18 | 北京工业大学 | Three-dimensional rebuilding method, device and the binocular vision system of binocular vision system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108982901A (en) | A kind of rotating speed measurement method of at the uniform velocity rotary body | |
Sirmacek et al. | Performance evaluation for 3-D city model generation of six different DSMs from air-and spaceborne sensors | |
CN110223348A (en) | Robot scene adaptive bit orientation estimation method based on RGB-D camera | |
CN110221603A (en) | A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud | |
CN110058237A (en) | InSAR point Yun Ronghe and three-dimensional deformation monitoring method towards High-resolution SAR Images | |
CN108734737A (en) | The method that view-based access control model SLAM estimation spaces rotate noncooperative target shaft | |
CN108986037A (en) | Monocular vision odometer localization method and positioning system based on semi-direct method | |
CN106548462B (en) | Non-linear SAR image geometric correction method based on thin-plate spline interpolation | |
CN109945856A (en) | Based on inertia/radar unmanned plane autonomous positioning and build drawing method | |
CN105760811B (en) | Global map closed loop matching process and device | |
CN107192350A (en) | A kind of three-dimensional laser scanner intrinsic parameter scaling method and device | |
CN105931234A (en) | Ground three-dimensional laser scanning point cloud and image fusion and registration method | |
CN105989604A (en) | Target object three-dimensional color point cloud generation method based on KINECT | |
CN105139379B (en) | Based on the progressive extracting method of classified and layered airborne Lidar points cloud building top surface | |
CN103559737A (en) | Object panorama modeling method | |
CN106228510B (en) | UAV system REAL TIME SAR IMAGES method for registering based on distortion degree segmentation | |
CN114419147A (en) | Rescue robot intelligent remote human-computer interaction control method and system | |
CN106023298A (en) | Point cloud rigid registration method based on local Poisson curved surface reconstruction | |
CN111781608A (en) | Moving target detection method and system based on FMCW laser radar | |
Zhang et al. | 3-D deep feature construction for mobile laser scanning point cloud registration | |
CN108154104A (en) | A kind of estimation method of human posture based on depth image super-pixel union feature | |
CN104318551B (en) | Gauss hybrid models point cloud registration method based on convex closure characteristic key | |
CN106526593A (en) | Sub-pixel-level corner reflector automatic positioning method based on SAR rigorous imaging model | |
CN116740288B (en) | Three-dimensional reconstruction method integrating laser radar and oblique photography | |
CN112068153A (en) | Crown clearance rate estimation method based on foundation laser radar point cloud |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB03 | Change of inventor or designer information | Inventors after: Bai Chengchao; Guo Jifeng; Zheng Hongxing; Wang Liu. Inventors before: Wang Liu; Guo Jifeng; Bai Chengchao; Zheng Hongxing |
| GR01 | Patent grant | |