CN110516516A - Robot pose measurement method and device, electronic equipment, storage medium - Google Patents

Robot pose measurement method and device, electronic equipment, storage medium

Info

Publication number
CN110516516A
CN110516516A
Authority
CN
China
Prior art keywords
feature vector
preset
point cluster
pose
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810494804.XA
Other languages
Chinese (zh)
Inventor
黄玉玺
吴迪
李雨倩
董秋伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201810494804.XA priority Critical patent/CN110516516A/en
Publication of CN110516516A publication Critical patent/CN110516516A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition

Abstract

The disclosure relates to a robot pose measurement method and device, an electronic device, and a storage medium, and belongs to the technical field of image processing. The method comprises: obtaining, from a current image, a first feature vector corresponding to a first feature point cluster; comparing the first feature vector with at least one preset feature vector to obtain a second feature vector; extracting, from the preset feature point clusters and preset poses corresponding to the preset feature vectors, the second feature point cluster and the labeled pose of the second feature vector; matching the first feature point cluster with the second feature point cluster to obtain feature point pairs; and solving a fundamental matrix using the feature point pairs and combining it with the labeled pose to obtain the current pose of the robot. The disclosure reduces the measurement error of the robot pose and thereby improves the accuracy of robot pose measurement.

Description

Robot pose measurement method and device, electronic equipment, storage medium
Technical field
The disclosure relates to the technical field of image processing, and in particular to a robot pose measurement method, a robot pose measurement device, an electronic device, and a computer-readable storage medium.
Background
In robotics, measuring the pose of a robot is one of the key technologies for its normal operation. Because vision-based measurement is inexpensive and yields relatively rich information, vision-based robot pose measurement has become a research hotspot.
In the related art, the robot pose is mainly measured with algorithms that estimate pose in real time from ORB image feature points, such as the ORB-SLAM (Oriented FAST and Rotated BRIEF) algorithm; the LSD-SLAM (Large-Scale Direct monocular SLAM) algorithm and others can also be used to measure the robot pose.
However, solving the robot pose with the above algorithms is a continuously accumulating process: the pose of the current frame is computed from the pose of the previous frame (or of the previous several frames) plus the offset of the current frame relative to it, which amounts to a continuous integration. In this process the error keeps accumulating and cannot be eliminated, so the measurement error grows with measurement time or travelled distance, and the accuracy of the measured robot pose is therefore low.
It should be noted that the information disclosed in the Background section above is only intended to enhance understanding of the background of the disclosure, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary of the invention
An object of the disclosure is to provide a robot pose measurement method and device, an electronic device, and a storage medium, thereby overcoming, at least to some extent, the large measurement error and low accuracy of robot pose measurement caused by the limitations and defects of the related art.
Other features and advantages of the disclosure will become apparent from the following detailed description, or may in part be learned by practice of the disclosure.
According to one aspect of the disclosure, a robot pose measurement method is provided, comprising: obtaining, from a current image, a first feature vector corresponding to a first feature point cluster; comparing the first feature vector with at least one preset feature vector to obtain a second feature vector; extracting, from the preset feature point cluster and preset pose corresponding to each preset feature vector, the second feature point cluster and the labeled pose of the second feature vector; matching the first feature point cluster with the second feature point cluster to obtain feature point pairs; and solving a fundamental matrix using the feature point pairs and the labeled pose to obtain the current pose of the robot.
In an exemplary embodiment of the disclosure, the method further comprises: when acquiring at least one template image with an image acquisition device, synchronously controlling a preset navigation device to acquire the preset pose; performing feature extraction on each template image to obtain the preset feature point cluster of each template image; storing the preset feature point cluster and the preset pose of each template image into a template library; and obtaining, from the preset feature point clusters and preset poses in the template library, the preset feature vector corresponding to each preset feature point cluster.
In an exemplary embodiment of the disclosure, comparing the first feature vector with at least one preset feature vector to obtain the second feature vector comprises: calculating the similarity between the first feature vector and at least one preset feature vector in the template library; and determining the preset feature vector with the greatest similarity to the first feature vector as the second feature vector.
In an exemplary embodiment of the disclosure, extracting the second feature point cluster and the labeled pose of the second feature vector from the preset feature point cluster and preset pose corresponding to each preset feature vector comprises: determining the preset feature point cluster corresponding to the preset feature vector with the greatest similarity to the first feature vector as the second feature point cluster of the second feature vector; and determining the preset pose corresponding to the preset feature vector with the greatest similarity to the first feature vector as the labeled pose of the second feature vector.
In an exemplary embodiment of the disclosure, the similarity is calculated as
S = 1 - |VC - V'| / (|VC| + |V'|),
where S is the similarity, |VC| is the norm of the first feature vector, |V'| is the norm of the preset feature vector, and |VC - V'| is the norm of the difference between the first feature vector and the preset feature vector.
In an exemplary embodiment of the disclosure, solving a fundamental matrix using the feature point pairs and the labeled pose to obtain the current pose of the robot comprises: using the epipolar constraint satisfied by the feature point pairs, decomposing, with a preset algorithm, a rotation matrix of the current image relative to the template image from the fundamental matrix; and obtaining the current pose of the robot from the rotation matrix and the labeled pose.
In an exemplary embodiment of the disclosure, the current pose is calculated as RC = RM × R, where RC is the current pose, RM is the labeled pose, and R is the rotation matrix.
In an exemplary embodiment of the disclosure, the preset algorithm comprises a five-point method or an eight-point method.
In an exemplary embodiment of the disclosure, the preset navigation device comprises a high-precision navigation device, and the image acquisition device comprises a vehicle-mounted monocular camera.
According to one aspect of the disclosure, a robot pose measurement device is provided, comprising: a vector acquisition module for obtaining, from a current image, a first feature vector corresponding to a first feature point cluster; a vector comparison module for comparing the first feature vector with at least one preset feature vector to obtain a second feature vector; a feature extraction module for extracting, from the preset feature point cluster and preset pose corresponding to each preset feature vector, the second feature point cluster and the labeled pose of the second feature vector; a feature matching module for matching the first feature point cluster with the second feature point cluster to obtain feature point pairs; and a pose acquisition module for solving a fundamental matrix using the feature point pairs and the labeled pose to obtain the current pose of the robot.
According to one aspect of the disclosure, an electronic device is provided, comprising: a processor; and a memory for storing instructions executable by the processor, wherein the processor is configured to perform, by executing the executable instructions, the robot pose measurement method described in any of the above.
According to one aspect of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the robot pose measurement method described in any of the above.
In the robot pose measurement method, robot pose measurement device, electronic device, and computer-readable storage medium provided in exemplary embodiments of the disclosure, the first feature vector corresponding to the first feature point cluster is compared with the preset feature vectors to obtain the second feature point cluster and labeled pose corresponding to the second feature vector; the first feature point cluster and the second feature point cluster are then matched to obtain feature point pairs; and the feature point pairs and the labeled pose are further used to obtain the current pose of the robot. On the one hand, comparing the first feature vector with the preset feature vectors to obtain the feature point pairs and the labeled pose of the second feature vector avoids the continuous accumulation of error during robot pose measurement and thus reduces the measurement error; on the other hand, the reduced measurement error improves the accuracy of robot pose measurement.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure. The drawings described below are obviously only some embodiments of the disclosure; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 schematically shows a robot pose measurement method in an exemplary embodiment of the disclosure;
Fig. 2 schematically shows the flow chart of building the template library in an exemplary embodiment of the disclosure;
Fig. 3 schematically shows the flow chart of the robot pose calculation in an exemplary embodiment of the disclosure;
Fig. 4 schematically shows the block diagram of a robot pose measurement device in an exemplary embodiment of the disclosure;
Fig. 5 schematically shows the block diagram of an electronic device in an exemplary embodiment of the disclosure;
Fig. 6 schematically shows a program product in an exemplary embodiment of the disclosure.
Detailed description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the disclosure. Those skilled in the art will recognize, however, that the technical solutions of the disclosure may be practiced while omitting one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known solutions are not shown or described in detail, so as not to obscure aspects of the disclosure.
In addition, the drawings are merely schematic illustrations of the disclosure and are not necessarily drawn to scale. Identical reference numerals in the drawings denote identical or similar parts, and their repeated description is omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or micro-controller devices.
A robot pose measurement method is first provided in this example embodiment. Referring to Fig. 1, the robot pose measurement method may comprise the following steps:
In step S110, a first feature vector corresponding to a first feature point cluster is obtained from a current image;
In step S120, the first feature vector is compared with at least one preset feature vector to obtain a second feature vector;
In step S130, the second feature point cluster and the labeled pose of the second feature vector are extracted from the preset feature point cluster and preset pose corresponding to each preset feature vector;
In step S140, the first feature point cluster and the second feature point cluster are matched to obtain feature point pairs;
In step S150, a fundamental matrix is solved using the feature point pairs, and the labeled pose is used to obtain the current pose of the robot.
In the robot pose measurement method provided in the present exemplary embodiment, on the one hand, comparing the first feature vector with the preset feature vectors to obtain feature point pairs and the labeled pose of the second feature vector, and obtaining the current pose of the robot from them, avoids the continuous accumulation of error during robot pose measurement and thus reduces the measurement error; on the other hand, the reduced measurement error improves the accuracy of robot pose measurement.
Next, the robot pose measurement method of the present exemplary embodiment is explained further with reference to the drawings.
In step S110, a first feature vector corresponding to a first feature point cluster is obtained from a current image.
In this example, the current image is the current frame acquired in real time by an image acquisition device on the section of road to be travelled; the image acquisition device may, for example, be a vehicle-mounted monocular camera. Before the current image is acquired, the monocular camera that collects the robot's image information needs to be calibrated. The intrinsic parameters to be calibrated may include the pixel dimensions in the x and y directions, the coordinates of the projection-plane center in the pixel coordinate system, the lens distortion parameters, and so on. Camera calibration can be completed with the calibration toolkits of software such as Matlab or OpenCV to obtain the above intrinsic parameters of the monocular camera.
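As a non-limiting illustration of this calibration step, a minimal OpenCV sketch is given below; the 9×6 chessboard size, the image location calib/*.png, and the variable names are assumptions made for the example, not part of the disclosure.

```python
import glob
import cv2
import numpy as np

# Minimal monocular calibration sketch with OpenCV (assumed 9x6 inner-corner chessboard).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):              # assumed location of calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]              # (width, height)

if image_size is not None:
    # K holds the focal lengths (in pixels along x and y) and the principal point;
    # dist holds the lens distortion coefficients.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("intrinsic matrix K:\n", K)
    print("distortion coefficients:", dist.ravel())
```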
The first feature point cluster consists of the multiple feature points obtained from the current frame image. Specifically, a current image of the section to be travelled is acquired, and a suitable feature extraction algorithm is then used to obtain multiple different features according to the scene; for example, features of the current image may be extracted with the SIFT (Scale-Invariant Feature Transform) algorithm, the SURF (Speeded-Up Robust Features) algorithm, or the ORB algorithm. For instance, for a current image A, feature 1 may be obtained with the ORB algorithm in scene 1 and feature 2 with the SURF algorithm in scene 2; the extracted feature 1 and feature 2 then constitute the first feature point cluster.
Next, the first feature vector corresponding to the first feature point cluster can be computed, where each first feature vector represents a feature point and different first feature vectors correspond to different features. It should be noted that the number of entries N of the first feature vector can be set to any number; in this example N = 10 is used for illustration. For example, feature 1 may be represented by V = [0,0,0,0,1,0,1,1,1,1] and feature 2 by V = [0,0,1,0,0,1,1,1,1,1].
Specifically, the current image PC captured in real time is obtained from the monocular camera, K features are extracted from PC with the different feature extraction algorithms to constitute the feature point cluster FC, and the first feature vector VC corresponding to FC is then calculated.
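The following sketch, assuming OpenCV's ORB implementation, shows one possible way to obtain the feature point cluster FC from the current frame; the toy_feature_vector mapping from FC to a ten-element binary vector VC is purely an illustrative stand-in, since the disclosure does not fix how VC is derived.

```python
import cv2
import numpy as np

def extract_cluster(image_bgr, n_features=500):
    """Detect ORB keypoints and descriptors; the descriptor rows play the role of the
    first feature point cluster F_C of the current frame."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors

def toy_feature_vector(descriptors, n_bins=10):
    """Illustrative stand-in for V_C: a binary occupancy vector over crude descriptor bins.
    The disclosure does not specify how F_C is reduced to V_C, so this mapping is an assumption."""
    v = np.zeros(n_bins, dtype=np.uint8)
    if descriptors is not None:
        bins = descriptors.sum(axis=1) % n_bins    # crude hash of each 32-byte descriptor
        v[np.unique(bins)] = 1
    return v

frame = cv2.imread("current_frame.png")            # assumed path of the current image P_C
keypoints, F_C = extract_cluster(frame)
V_C = toy_feature_vector(F_C)
print("keypoints:", len(keypoints), "V_C =", V_C)
```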
In step S120, the first feature vector is compared with at least one preset feature vector to obtain a second feature vector.
In this example, the preset feature vectors are the feature vectors corresponding to all template images stored in a previously established template library, i.e. the feature vector corresponding to each preset feature point cluster in the template library, and they can be used for similarity matching. Therefore, before the first feature vector is compared with the preset feature vectors, the preset feature vectors must first be determined. Determining the preset feature vectors specifically includes: when acquiring at least one template image with the image acquisition device, synchronously controlling the preset navigation device to acquire the preset pose; performing feature extraction on each template image to obtain the preset feature point cluster of each template image; storing the preset feature point cluster and the preset pose of each template image into the template library; and obtaining, from the preset feature point clusters and preset poses in the template library, the preset feature vector corresponding to each preset feature point cluster.
The template images are multiple images captured in advance with the image acquisition device, which is the same device used to obtain the current image in step S110, for example a vehicle-mounted monocular camera. When template images are collected, the vehicle-mounted monocular camera and the preset navigation device HPNE are first time-synchronized to guarantee the consistency of image acquisition. Next, when a template image of the section to be travelled is obtained with the vehicle-mounted monocular camera, the preset navigation device HPNE is synchronously controlled to acquire the preset pose. A suitable feature extraction algorithm is then applied to each template image to obtain a preset feature point cluster composed of a large number of feature points; at the same time, the preset pose measured by the HPNE at that moment is attached to the preset feature point cluster collected from each template image. For example, for the preset feature point cluster F1 extracted from the template image acquired at time T1, the preset pose R1 measured by the HPNE at time T1 is attached. Next, the preset feature point cluster associated with each template image and the preset pose measured by the HPNE at the same time are stored into the template library; on this basis, the preset feature vector corresponding to each preset feature point cluster is obtained from the preset feature point clusters and preset poses stored in the template library.
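One hedged way to picture this offline stage is a list of records, each pairing a template image's feature point cluster with the pose reported by the HPNE at the same instant. The sketch below assumes that time-synchronized (image, pose) pairs are already available and reuses the illustrative vector function from the previous sketch; the record layout and function names are not taken from the disclosure.

```python
import pickle
import cv2
import numpy as np

def build_template_library(synced_samples, vector_fn):
    """synced_samples: iterable of (image_bgr, preset_pose_3x3) pairs captured while the camera
    and the navigation device (HPNE) are time-synchronized. Returns the template library as a
    list of records; the layout here is illustrative, not mandated by the disclosure."""
    orb = cv2.ORB_create(nfeatures=500)
    library = []
    for image_bgr, pose in synced_samples:
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        if descriptors is None:
            continue                                    # skip frames without usable features
        library.append({
            "cluster": descriptors,                     # preset feature point cluster F'
            "points": np.float32([k.pt for k in keypoints]),
            "pose": np.asarray(pose, dtype=float),      # preset pose R' from the HPNE
            "vector": vector_fn(descriptors),           # preset feature vector V'
        })
    return library

# Usage sketch: build once offline, persist, then reload online without the navigation device.
# library = build_template_library(samples, toy_feature_vector)
# pickle.dump(library, open("template_library.pkl", "wb"))
```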
In this example, establishing the template library in advance to store the preset feature point clusters, the preset poses, and the preset feature vector corresponding to each preset feature point cluster avoids the continuous accumulation of error during pose calculation and thus reduces the measurement error.
After the preset feature vectors have been determined, the first feature vector can be compared with the multiple preset feature vectors extracted from the template library to obtain the second feature vector. This specifically includes: calculating the similarity between the first feature vector and at least one preset feature vector in the template library; and determining the preset feature vector with the greatest similarity to the first feature vector as the second feature vector.
That is, after the first feature vector of the current image has been obtained, the preset feature vectors corresponding to all stored preset feature point clusters can be extracted or loaded from the template library, the similarity between the first feature vector obtained in step S110 and each preset feature vector is computed, and the preset feature vector with the greatest similarity to the first feature vector is taken as the second feature vector of step S120.
It should be noted that the similarity between the first feature vector and a preset feature vector can be calculated with formula (1):
S = 1 - |VC - V'| / (|VC| + |V'|)   (1)
where S is the similarity, |VC| is the norm of the first feature vector, |V'| is the norm of the preset feature vector, and |VC - V'| is the norm of the difference between the first feature vector and the preset feature vector. It follows from the formula that the larger the similarity, the closer the two feature clusters can be considered to be. By using this similarity formula, the template image most similar to the current image can be obtained from the template library more accurately, which improves the accuracy of the pose measurement based on that template image.
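Under the reconstruction of formula (1) given above (the exact form of the formula is inferred here, since the original expression is not reproduced in this text), the similarity test and the selection of the most similar template reduce to a few lines; the printed value reproduces the 0.6 of the worked example later in this section.

```python
import numpy as np

def similarity(v_c, v_p):
    """Formula (1) as reconstructed above: S = 1 - |V_C - V'| / (|V_C| + |V'|), with |.| taken
    as the L1 norm. The exact form is an inference from the surrounding text and worked example,
    not a verbatim copy of the original filing."""
    v_c = np.asarray(v_c, dtype=float)
    v_p = np.asarray(v_p, dtype=float)
    return 1.0 - np.abs(v_c - v_p).sum() / (np.abs(v_c).sum() + np.abs(v_p).sum())

def best_match(v_c, library):
    """Steps S120/S130: pick the template record whose preset vector V' is most similar to V_C."""
    return max(library, key=lambda record: similarity(v_c, record["vector"]))

# Reproduces the worked example given later in this section: S = 0.6.
print(similarity([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]))
```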
Next, in step S130, the second feature point cluster and the labeled pose of the second feature vector are extracted from the preset feature point cluster and preset pose corresponding to each preset feature vector.
After the second feature vector with the greatest similarity to the first feature vector has been determined, the second feature point cluster corresponding to the second feature vector and the labeled pose corresponding to the second feature vector become available.
Here, the second feature point cluster is the preset feature point cluster corresponding to the preset feature vector with the greatest similarity to the first feature vector VC. For example, the preset feature vectors may include V1', V2', and V3', with corresponding preset feature point clusters F1', F2', and F3'. If the preset feature vector most similar to the first feature vector VC is V1', then V1' is determined as the second feature vector, and the preset feature point cluster F1' corresponding to V1' is taken as the second feature point cluster FM of the second feature vector. The labeled pose is the preset pose that was attached, when the template library was established in advance, to the preset feature point cluster corresponding to the preset feature vector with the greatest similarity to the first feature vector VC.
For example, the template library may store the time-synchronized template image P' and the preset pose R' measured by the HPNE, the feature point cluster F' formed by the features of the template image P', and the preset feature vector V' corresponding to each feature point cluster.
In actual use, the current image PC is obtained from the camera and the feature point cluster FC is extracted, from which the first feature vector VC is calculated; the similarity between VC and each preset feature vector V' stored in the template library is then calculated with formula (1). For example, with the first feature vector VC = [1,1,1,1,1,0,0,0,0,0] and a preset feature vector V' = [1,0,1,0,1,0,1,0,1,0], when the similarity is calculated with formula (1), the value of |VC| is 5, the element-wise absolute differences of the two vectors sum to |VC - V'| = 4, and the calculated similarity S is 0.6. It should be noted that, since at least one feature can be extracted from every current image in practical applications, the case of an all-zero feature vector is not considered in this example.
If the similarities of the preset feature vectors V1', V2', and V3' with the first feature vector are 0.5, 0.6, and 0.8 respectively, then V3' is determined as the second feature vector. It should be noted that the way the first feature vector is compared with the preset feature vectors can be designed for different features and different environments, and is not particularly limited here.
Next, the second feature point cluster FM corresponding to the second feature vector with the greatest similarity to the first feature vector VC, and the corresponding labeled pose RM, are obtained: the preset feature point cluster and preset pose corresponding to the preset feature vector V3' are retrieved from the template library, the preset feature point cluster is taken as the second feature point cluster described in step S130, and the preset pose is taken as the labeled pose described in step S130.
Further, in step S140, the first feature point cluster and the second feature point cluster are matched to obtain feature point pairs.
For each point in the first feature point cluster of the current image, the feature information of that point is matched against the feature information of each point in the second feature point cluster of the template image; if the match succeeds, the point of the first feature point cluster and the point of the second feature point cluster form a feature point pair. In this embodiment, the feature information of a feature point cluster may include the pixel values of the feature points. If a point of the first feature point cluster FC of the current image has a pixel value consistent with a point of the second feature point cluster FM of the template image, that pair of points constitutes a feature point pair. A feature point pair can be regarded as a one-to-one correspondence between the clusters, and its concrete form can be configured according to actual needs. Matching the first feature point cluster against the second feature point cluster to form feature point pairs yields more accurate correspondences and improves precision.
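A sketch of step S140 follows. It matches ORB descriptors of the current frame against those of the chosen template with a brute-force Hamming matcher and Lowe's ratio test; this replaces the pixel-value comparison described above with a common practical alternative and is our substitution, not a requirement of the disclosure.

```python
import cv2

def match_clusters(desc_current, desc_template, ratio=0.75):
    """Step S140: match the first feature point cluster F_C against the second cluster F_M.
    Brute-force Hamming matching with Lowe's ratio test is used here instead of the pixel-value
    comparison described in the text; it is a practical choice, not mandated by the disclosure."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(desc_current, desc_template, k=2)
    return [m[0] for m in knn if len(m) == 2 and m[0].distance < ratio * m[1].distance]

# Usage sketch: matches = match_clusters(F_C, best["cluster"]) for the record chosen in step S120.
```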
In step S150, a fundamental matrix is solved using the feature point pairs, and the labeled pose is used to obtain the current pose of the robot.
In this example, there is an important geometric constraint between two images of the same scene obtained from different viewpoints, namely the epipolar constraint, which can be represented by a 3×3 singular matrix of rank 2: the fundamental matrix. The fundamental matrix can be solved from the epipolar constraint formed by the feature point pairs, and once the epipolar correspondence between the two images is determined, the search range can be reduced.
Specifically, using the epipolar constraint of the feature point pairs, a preset algorithm is applied to decompose from the fundamental matrix the rotation matrix of the current image relative to the template image. The preset algorithm may, for example, be the five-point method or the eight-point method, and may also include the seven-point method or other algorithms; the five-point and eight-point methods are used as examples here. The basic idea of the five-point method is that, when the motion between the two images is a pure translation, the essential matrix can be determined linearly from 5 pairs of corresponding image points, and the fundamental matrix can then be determined through its internal relation with the essential matrix. When the fundamental matrix is computed with the eight-point method, the coordinates of the input point sets are usually normalized first in order to improve the stability and precision of the solution; the fundamental matrix of the corresponding matches is then solved through a linear solution step and a singularity-constraint step. The fundamental matrix can be solved with Matlab or OpenCV functions using either the five-point method or the eight-point method, and the rotation matrix of the current image relative to the template image is decomposed from it, where a rotation matrix is a matrix that, when multiplied with a vector, changes only the direction of the vector and not its magnitude. The current pose of the robot corresponding to the current image can then be calculated with formula (2). The robot pose here refers to the pitch, yaw, and roll angles of the robot relative to the world coordinate system. The current pose is calculated with formula (2):
RC = RM × R   (2)
where RC is the calculated current pose of the robot, RM is the labeled pose, and R is the rotation matrix.
For example, the feature points in the first feature point cluster FC and the second feature point cluster FM are matched to obtain one-to-one feature point pairs Fp between the clusters; the fundamental matrix E is then obtained from the epipolar constraint of the feature point pairs Fp; the eight-point or five-point method is further used to decompose the rotation matrix R of the first feature point cluster FC of the current image relative to the second feature point cluster FM of the template image; finally, the current pose RC of the robot in the current image is calculated according to formula (2).
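To make step S150 concrete, the sketch below recovers the relative rotation with OpenCV's five-point routine (cv2.findEssentialMat and cv2.recoverPose) and then applies formula (2). Because the camera matrix K is available from calibration, the essential matrix is used in place of the fundamental matrix, which is an equivalent route rather than a quotation of the disclosure; kp_current, template_points, and R_M refer to quantities produced in the earlier sketches.

```python
import cv2
import numpy as np

def current_pose(kp_current, template_points, matches, K, R_M):
    """Step S150 sketch: recover the relative rotation R from the matched points with OpenCV's
    five-point solver, then apply formula (2): R_C = R_M x R. K is the intrinsic matrix from
    calibration and R_M the labeled pose of the matched template."""
    pts_c = np.float32([kp_current[m.queryIdx].pt for m in matches])
    pts_t = template_points[[m.trainIdx for m in matches]]

    E, inliers = cv2.findEssentialMat(pts_c, pts_t, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_c, pts_t, K, mask=inliers)

    return R_M @ R        # formula (2): current pose of the robot

# R_C returned here is the robot's attitude (pitch, yaw, roll) relative to the world
# coordinate system, expressed as a rotation matrix.
```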
In this example, during actual use only the current image from the vehicle-mounted monocular camera is needed to obtain the current pose of the robot; in other words, the current pose can be solved from monocular vision alone, without a high-precision navigation device, which lowers the overall cost of the solution and improves its cost-effectiveness. In addition, by solving the fundamental matrix, the rotation matrix of the current image relative to the template image most similar to it is obtained, and the current pose is then calculated from the rotation matrix and the labeled pose corresponding to that template image, which improves the accuracy of the robot pose calculation.
Fig. 2 shows the flow of building the template library. Referring to Fig. 2, the flow specifically includes:
In step S201, the preset navigation device is started; the preset navigation device may, for example, be a high-precision navigation device HPNE;
In step S202, the camera is started while the preset navigation device is started; the camera here may, for example, be a vehicle-mounted monocular camera or another camera achieving the same technical effect;
In step S203, the preset navigation device and the vehicle-mounted monocular camera are set to be time-synchronized;
In step S204, the time-synchronized image P' and the preset pose R' measured by the HPNE are acquired;
In step S205, the features F' corresponding to the image P' are extracted;
In step S206, the features F' and the preset pose R' measured by the HPNE are stored to form the template library;
In step S207, the robot is controlled to be in a moving state.
Fig. 3 shows the flow of the robot pose calculation, which specifically includes:
In step S301, the feature point clusters F' and preset poses R' stored in the template library built in Fig. 2 are loaded;
In step S302, the feature vector V' corresponding to each feature point cluster is calculated, for use in feature cluster similarity matching;
In step S303, only the monocular camera is started; the preset navigation device is not started;
In step S304, an image PC is obtained from the camera;
In step S305, the feature point cluster FC is extracted;
In step S306, the feature vector VC is calculated;
In step S307, by comparing the feature vectors, the preset feature point cluster FM most similar to the feature point cluster FC and its corresponding preset pose RM are obtained from the template library and taken as the second feature point cluster and labeled pose corresponding to the second feature vector;
In step S308, feature matching is performed, for example between the first feature point cluster and the obtained second feature point cluster;
In step S309, the fundamental matrix E is solved, using either the five-point method or the eight-point method;
In step S310, the fundamental matrix is decomposed to obtain R;
In step S311, the current pose of the robot is calculated as RC = RM × R.
After the current pose has been calculated, the flow returns to step S304 to perform the robot pose calculation of the next stage.
According to the flows shown in Fig. 2 and Fig. 3, establishing the template library in advance to store the preset feature point clusters, the preset poses, and the preset feature vector corresponding to each preset feature point cluster avoids the continuous accumulation of error during the calculation of the robot's current pose, which reduces the measurement error and improves the accuracy of robot pose measurement.
The disclosure further provides a robot pose measurement device. Referring to Fig. 4, the robot pose measurement device 400 may include:
a vector acquisition module 401, which may be used to obtain, from a current image, a first feature vector corresponding to a first feature point cluster;
a vector comparison module 402, which may be used to compare the first feature vector with at least one preset feature vector to obtain a second feature vector;
a feature extraction module 403, which may be used to extract, from the preset feature point cluster and preset pose corresponding to each preset feature vector, the second feature point cluster and the labeled pose of the second feature vector;
a feature matching module 404, which may be used to match the first feature point cluster with the second feature point cluster to obtain feature point pairs;
a pose acquisition module 405, which may be used to solve a fundamental matrix using the feature point pairs and the labeled pose, to obtain the current pose of the robot.
It should be noted that the details of each module of the above robot pose measurement device have already been described in detail in the corresponding robot pose measurement method and are therefore not repeated here.
It should also be noted that although several modules or units of the device for executing actions are mentioned in the detailed description above, this division is not mandatory. In fact, according to embodiments of the disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
In addition, although the steps of the methods of the disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be executed in that particular order, or that all of the illustrated steps must be executed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps, and so on.
Through the description of the embodiments above, those skilled in the art will readily understand that the example embodiments described herein may be implemented in software, or in software combined with necessary hardware. Therefore, the technical solution according to the embodiments of the disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes instructions that cause a computing device (which may be a personal computer, a server, a mobile terminal, a network device, etc.) to execute the method according to the embodiments of the disclosure.
In an exemplary embodiment of the disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that various aspects of the invention may be implemented as a system, a method, or a program product. Therefore, various aspects of the invention may take the form of a complete hardware embodiment, a complete software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may collectively be referred to here as a "circuit", a "module", or a "system".
An electronic device 500 according to this embodiment of the invention is described below with reference to Fig. 5. The electronic device 500 shown in Fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the invention.
As shown in Fig. 5, the electronic device 500 takes the form of a general-purpose computing device. The components of the electronic device 500 may include, but are not limited to: at least one processing unit 510, at least one storage unit 520, and a bus 530 connecting the different system components (including the storage unit 520 and the processing unit 510).
The storage unit stores program code that can be executed by the processing unit 510, so that the processing unit 510 performs the steps of the various exemplary embodiments of the invention described in the "Exemplary methods" section of this specification. For example, the processing unit 510 may execute the steps shown in Fig. 1: in step S110, a first feature vector corresponding to a first feature point cluster is obtained from a current image; in step S120, the first feature vector is compared with at least one preset feature vector to obtain a second feature vector; in step S130, the second feature point cluster and the labeled pose of the second feature vector are extracted from the preset feature point cluster and preset pose corresponding to each preset feature vector; in step S140, the first feature point cluster and the second feature point cluster are matched to obtain feature point pairs; and in step S150, a fundamental matrix is solved using the feature point pairs, and the labeled pose is used to obtain the current pose of the robot.
The storage unit 520 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) 5201 and/or a cache memory 5202, and may further include a read-only memory (ROM) 5203.
The storage unit 520 may also include a program/utility 5204 having a set of (at least one) program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
The bus 530 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, the processing unit, or a local bus using any of a variety of bus structures.
The electronic device 500 may also communicate with one or more external devices 600 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 550. Moreover, the electronic device 500 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 through the bus 530. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Through the description of the embodiments above, those skilled in the art will readily understand that the example embodiments described herein may be implemented in software, or in software combined with necessary hardware. Therefore, the technical solution according to the embodiments of the disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes instructions that cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the embodiments of the disclosure.
In an exemplary embodiment of the disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the method described above in this specification is stored. In some possible embodiments, various aspects of the invention may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps of the various exemplary embodiments of the invention described in the "Exemplary methods" section of this specification.
Referring to Fig. 6, a program product 700 for implementing the above method according to an embodiment of the invention is described. It may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the invention is not limited to this; in this document, a readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in connection with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium, and may send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out the operations of the invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In addition, the above drawings are only schematic illustrations of the processing included in the methods according to exemplary embodiments of the invention and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the temporal order of these processes. It is also easy to understand that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed in this disclosure. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.

Claims (12)

1. A robot pose measurement method, characterized by comprising:
obtaining, from a current image, a first feature vector corresponding to a first feature point cluster;
comparing the first feature vector with at least one preset feature vector to obtain a second feature vector;
extracting, from the preset feature point cluster and preset pose corresponding to each preset feature vector, the second feature point cluster and the labeled pose of the second feature vector;
matching the first feature point cluster with the second feature point cluster to obtain feature point pairs;
solving a fundamental matrix using the feature point pairs and the labeled pose, to obtain the current pose of the robot.
2. The robot pose measurement method according to claim 1, characterized in that the method further comprises:
when acquiring at least one template image with an image acquisition device, synchronously controlling a preset navigation device to acquire the preset pose;
performing feature extraction on each template image to obtain the preset feature point cluster of each template image;
storing the preset feature point cluster and the preset pose of each template image into a template library;
obtaining, from the preset feature point clusters and the preset poses in the template library, the preset feature vector corresponding to each preset feature point cluster.
3. The robot pose measurement method according to claim 2, characterized in that comparing the first feature vector with at least one preset feature vector to obtain the second feature vector comprises:
calculating the similarity between the first feature vector and at least one preset feature vector in the template library;
determining the preset feature vector with the greatest similarity to the first feature vector as the second feature vector.
4. The robot pose measurement method according to claim 3, characterized in that extracting, from the preset feature point cluster and preset pose corresponding to each preset feature vector, the second feature point cluster and the labeled pose of the second feature vector comprises:
determining the preset feature point cluster corresponding to the preset feature vector with the greatest similarity to the first feature vector as the second feature point cluster of the second feature vector;
determining the preset pose corresponding to the preset feature vector with the greatest similarity to the first feature vector as the labeled pose of the second feature vector.
5. The robot pose measurement method according to claim 3, characterized in that the similarity is calculated as:
S = 1 - |VC - V'| / (|VC| + |V'|)
where S is the similarity, |VC| is the norm of the first feature vector, |V'| is the norm of the preset feature vector, and |VC - V'| is the norm of the difference between the first feature vector and the preset feature vector.
6. The robot pose measurement method according to claim 2, characterized in that solving a fundamental matrix using the feature point pairs and the labeled pose, to obtain the current pose of the robot, comprises:
using the epipolar constraint of the feature point pairs, decomposing, with a preset algorithm, a rotation matrix of the current image relative to the template image from the fundamental matrix;
obtaining the current pose of the robot from the rotation matrix and the labeled pose.
7. The robot pose measurement method according to claim 6, characterized in that the current pose is calculated as:
RC = RM × R, where RC is the current pose, RM is the labeled pose, and R is the rotation matrix.
8. The robot pose measurement method according to claim 6, characterized in that the preset algorithm comprises a five-point method or an eight-point method.
9. The robot pose measurement method according to claim 2, characterized in that the preset navigation device comprises a high-precision navigation device, and the image acquisition device comprises a vehicle-mounted monocular camera.
10. A robot pose measurement device, characterized by comprising:
a vector acquisition module for obtaining, from a current image, a first feature vector corresponding to a first feature point cluster;
a vector comparison module for comparing the first feature vector with at least one preset feature vector to obtain a second feature vector;
a feature extraction module for extracting, from the preset feature point cluster and preset pose corresponding to each preset feature vector, the second feature point cluster and the labeled pose of the second feature vector;
a feature matching module for matching the first feature point cluster with the second feature point cluster to obtain feature point pairs;
a pose acquisition module for solving a fundamental matrix using the feature point pairs and the labeled pose, to obtain the current pose of the robot.
11. An electronic device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform, by executing the executable instructions, the robot pose measurement method according to any one of claims 1-9.
12. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the robot pose measurement method according to any one of claims 1-9.
CN201810494804.XA 2018-05-22 2018-05-22 Robot pose measurement method and device, electronic equipment, storage medium Pending CN110516516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810494804.XA CN110516516A (en) 2018-05-22 2018-05-22 Robot pose measurement method and device, electronic equipment, storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810494804.XA CN110516516A (en) 2018-05-22 2018-05-22 Robot pose measurement method and device, electronic equipment, storage medium

Publications (1)

Publication Number Publication Date
CN110516516A true CN110516516A (en) 2019-11-29

Family

ID=68621807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810494804.XA Pending CN110516516A (en) 2018-05-22 2018-05-22 Robot pose measurement method and device, electronic equipment, storage medium

Country Status (1)

Country Link
CN (1) CN110516516A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809118A (en) * 2016-03-03 2016-07-27 重庆中科云丛科技有限公司 Three-dimensional object identifying method and apparatus
CN107493371A (en) * 2016-06-13 2017-12-19 中兴通讯股份有限公司 A kind of recognition methods, device and the terminal of the motion feature of terminal
CN106295616A (en) * 2016-08-24 2017-01-04 张斌 Exercise data analyses and comparison method and device
CN106326867A (en) * 2016-08-26 2017-01-11 维沃移动通信有限公司 Face recognition method and mobile terminal
CN107833249A (en) * 2017-09-29 2018-03-23 南京航空航天大学 A kind of carrier-borne aircraft landing mission attitude prediction method of view-based access control model guiding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
阚江明 et al.: "Computer-vision-based method for three-dimensional reconstruction of standing trees", 30 November 2011 *
陈建军: "3D point cloud reconstruction from airship image sequences", China Master's Theses Full-text Database, Basic Sciences *
陈明芽: "Research on robot localization technology aided by natural visual landmarks", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US11055866B2 (en) System and method for disparity estimation using cameras with different fields of view
US11176694B2 (en) Method and apparatus for active depth sensing and calibration method thereof
CN109947886B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110866953B (en) Map construction method and device, and positioning method and device
CN104885098B (en) Mobile device based text detection and tracking
CN110059652B (en) Face image processing method, device and storage medium
US20170228585A1 (en) Face recognition system and face recognition method
CN111612852B (en) Method and apparatus for verifying camera parameters
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US11423510B2 (en) System and method for providing dolly zoom view synthesis
US20240029297A1 (en) Visual positioning method, storage medium and electronic device
CN112907620A (en) Camera pose estimation method and device, readable storage medium and electronic equipment
CN113256718B (en) Positioning method and device, equipment and storage medium
WO2019127306A1 (en) Template-based image acquisition using a robot
CN113076891B (en) Human body posture prediction method and system based on improved high-resolution network
CN113361365A (en) Positioning method and device, equipment and storage medium
US20220414919A1 (en) Method and apparatus for depth-aided visual inertial odometry
CN110516516A (en) Robot pose measurement method and device, electronic equipment, storage medium
CN111062479B (en) Neural network-based rapid model upgrading method and device
Gedik et al. Fusing 2D and 3D clues for 3D tracking using visual and range data
CN115880555B (en) Target detection method, model training method, device, equipment and medium
KR102429297B1 (en) Method and system for image colorization based on deep-learning
CN109325962B (en) Information processing method, device, equipment and computer readable storage medium
Tang et al. Local Semantic Feature-Based Hierarchical Matching Algorithm for Wide-Area Remote Sensing Images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20191129