CN107481287A - Multi-marker-based object positioning and orientation method and system - Google Patents
Multi-marker-based object positioning and orientation method and system
- Publication number
- CN107481287A, CN201710571344.1A, CN201710571344A
- Authority
- CN
- China
- Prior art keywords
- marker
- determining
- target marker
- image
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
The present invention relates to a multi-marker-based object positioning and orientation method and system. The method comprises the following steps: arranging a plurality of mutually distinct recognizable markers in the movement space of an object to be positioned; capturing, by a camera device mounted on the object to be positioned, an image containing at least one marker; recognizing, processing and screening all markers in the image to obtain a unique target marker, and determining position information of the target marker in the image coordinate system; decoding the target marker to obtain its ID number; determining, according to the ID number, the marker coordinates of the target marker in the world coordinate system; and determining the pose of the object to be positioned according to the position information and the marker coordinates. The multi-marker-based object positioning and orientation method and system provided by the invention achieve wide-range positioning and orientation of the object to be positioned, with strong practicality and relatively low cost.
Description
Technical field
The present invention relates to the field of computer vision and robot navigation and positioning, and in particular to a multi-marker-based object positioning and orientation method and system.
Background art
With the rapid development of positioning technology and LBS (Location-Based Services), the demand for positioning has reached a new height, and location services are widely used in navigation, tracking, sightseeing guidance and the like. The key technology of LBS is navigation and positioning, including outdoor positioning, indoor positioning, visual positioning and so on.
Visual positioning is a method of pose measurement that uses computer vision to recognize artificially placed markers or natural features. It is widely used in robotic systems, moving-body control systems and precision inspection systems. At present, visual positioning technology has the following drawbacks: the positioning and orientation range is small; the accuracy is insufficient; accurate positioning and orientation of a robot or aircraft during motion cannot be achieved; and when artificially placed markers or natural features are recognized, the recognition accuracy is low.
Summary of the invention
The technical problem to be solved by the present invention, in view of the shortcomings of the prior art, is to provide a multi-marker-based object positioning and orientation method and system.
The technical solution by which the present invention solves the above technical problem is as follows:
A multi-marker-based object positioning and orientation method comprises the following steps:
Step 1: arranging a plurality of mutually distinct recognizable markers in the movement space of an object to be positioned;
Step 2: capturing, by a camera device mounted on the object to be positioned, an image containing at least one of the markers;
Step 3: recognizing, processing and screening all markers in the image to obtain a unique target marker, and determining position information of the target marker in the image coordinate system;
Step 4: decoding the target marker to obtain the ID number of the target marker;
Step 5: determining, according to the ID number of the target marker, the marker coordinates of the target marker in the world coordinate system;
Step 6: determining the pose of the object to be positioned according to the position information and the marker coordinates.
The beneficial effects of the invention are as follows: in the multi-marker-based object positioning and orientation method provided by the invention, a plurality of mutually distinct recognizable markers are arranged in the movement space of the object to be positioned, and the object to be positioned recognizes and processes these markers to determine its own pose. Wide-range positioning and orientation of the object to be positioned is thus achieved, with strong practicality and relatively low cost.
On the basis of the above technical solution, the present invention can be further improved as follows.
Further, step 3 specifically comprises:
Step 3.1: recognizing the image according to a pre-stored OpenCV recognition algorithm to obtain a binary contour map of the image, and determining the vertices of each contour in the image;
Step 3.2: processing the image according to a transformation matrix obtained from the vertices to obtain candidate markers;
Step 3.3: judging the number of candidate markers; when the number of candidate markers equals 1, taking the candidate marker as the target marker; when the number of candidate markers is greater than 1, judging the contour area of each candidate marker;
Step 3.4: when the contour areas of the candidate markers differ, taking the candidate marker with the largest contour area as the target marker; when the contour areas of the candidate markers are identical, judging the distance between the midpoint of each candidate marker and the principal point of the image, and selecting the candidate marker closest to the principal point as the target marker;
Step 3.5: determining the vertex coordinates of each vertex of the target marker in the image coordinate system, thereby obtaining the position information of the target marker in the image coordinate system.
Further, step 3.1 specifically comprises:
Step 3.1.1: performing grayscale conversion and binarization on the image in turn to obtain a binary image, and denoising the binary image;
Step 3.1.2: extracting the contours of the binary image according to a preset contour extraction algorithm;
Step 3.1.3: removing, according to a preset contour threshold, contours whose area is smaller than the contour threshold;
Step 3.1.4: performing polygonal approximation on the retained contours to obtain polygonal contours;
Step 3.1.5: judging whether each polygonal contour is a convex polygon, and removing polygonal contours that are not convex polygons;
Step 3.1.6: extracting the vertices of the retained polygonal contours.
Further, step 6 specifically comprises:
Step 6.1: determining, according to the position information, the marker coordinates and a preset pose calculation method, the pose of the target marker in the world coordinate system relative to the image coordinate system;
Step 6.2: determining, according to a preset inversion operation, the pose of the image coordinate system relative to the world coordinate system, thereby obtaining the pose of the object to be positioned.
Further, the object positioning and orientation method also comprises:
Step 7: when the object to be positioned translates or rotates, re-executing step 2 to step 6 to determine the pose of the object to be positioned.
The further beneficial effect of the above solution is: by recognizing the plurality of mutually distinct markers while the object to be positioned moves, the object can be positioned and oriented promptly and accurately even while translating or rotating. Not only is the three-dimensional position determined, but the three-dimensional deflection angles are also obtained, realizing six-degree-of-freedom pose measurement, improving practicality and extending the positioning and orientation range.
The other technical solution by which the present invention solves the above technical problem is as follows:
A multi-marker-based object positioning and orientation system comprises a plurality of mutually distinct recognizable markers and an object to be positioned, wherein:
the markers are arranged in the movement space of the object to be positioned;
the object to be positioned comprises:
a camera device, for capturing an image containing at least one of the markers;
a processor, for recognizing, processing and screening all markers in the image to obtain a unique target marker, determining position information of the target marker in the image coordinate system, decoding the target marker to obtain the ID number of the target marker, determining the marker coordinates of the target marker in the world coordinate system according to the ID number, and determining the pose of the object to be positioned according to the position information and the marker coordinates.
Further, the processor specifically comprises:
an image recognition unit, for recognizing the image according to a pre-stored OpenCV recognition algorithm to obtain a binary contour map of the image, and determining the vertices of each contour in the image;
an image processing unit, for processing the image according to a transformation matrix obtained from the vertices to obtain candidate markers;
a judging unit, for judging the number of candidate markers: when the number of candidate markers equals 1, taking the candidate marker as the target marker; when the number of candidate markers is greater than 1, judging the contour area of each candidate marker; when the contour areas of the candidate markers differ, taking the candidate marker with the largest contour area as the target marker; when the contour areas of the candidate markers are identical, judging the distance between the midpoint of each candidate marker and the principal point of the image, and selecting the candidate marker closest to the principal point as the target marker;
a coordinate determining unit, for determining the vertex coordinates of each vertex of the target marker in the image coordinate system, thereby obtaining the position information of the target marker in the image coordinate system.
Further, the image recognition unit is specifically configured to perform grayscale conversion and binarization on the image in turn to obtain a binary image; denoise the binary image; extract the contours of the binary image according to a preset contour extraction algorithm; remove, according to a preset contour threshold, contours whose area is smaller than the contour threshold; perform polygonal approximation on the retained contours to obtain polygonal contours; judge whether each polygonal contour is a convex polygon and remove those that are not; and extract the vertices of the retained polygonal contours.
Further, the processor also comprises:
a computing unit, for determining, according to the position information, the marker coordinates and a preset pose calculation method, the pose of the target marker in the world coordinate system relative to the image coordinate system, and determining, according to a preset inversion operation, the pose of the image coordinate system relative to the world coordinate system, thereby obtaining the pose of the object to be positioned.
Further, the processor is also configured to redetermine the pose of the object to be positioned when the object translates or rotates.
Additional advantages of aspects of the present invention will be set forth in part in the description that follows, and in part will become apparent from the description or be learned by practice of the present invention.
Brief description of the drawings
Fig. 1 is a schematic flow chart of a multi-marker-based object positioning and orientation method provided by an embodiment of the present invention;
Fig. 2 is a flow chart of a multi-marker-based object positioning and orientation method provided by another embodiment of the present invention;
Fig. 3 is a schematic diagram of the spatial structure of a multi-marker-based object positioning and orientation system provided by another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the object to be positioned in a multi-marker-based object positioning and orientation system provided by another embodiment of the present invention.
Embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples given serve only to explain the present invention and are not intended to limit its scope.
As shown in Fig. 1, which is a schematic flow chart of a multi-marker-based object positioning and orientation method provided by an embodiment of the present invention, the method comprises the following steps:
S1: a plurality of mutually distinct recognizable markers are arranged in the movement space of the object to be positioned. The markers are intended to be recognized by the object to be positioned, so each has a specific recognizable feature, which may be a texture-pattern feature or a pattern feature composed of feature points. The pattern features of the markers all differ from one another so that each marker can be distinguished; each pattern feature can be converted into a specific ID number, and the object to be positioned distinguishes the markers by these ID numbers.
The recognizable feature of each marker may comprise, or be convertible into, a set of at least four feature points carrying an ID number, and this feature point set should make it convenient to determine the position of the marker relative to the world coordinate system. For example, a marker may be an inkjet-printed pattern whose outer edge has a ring of dark border and whose interior is a combination of black and white grid cells; the black-and-white cell combination inside each marker pattern yields the corresponding ID number through Hamming-code decoding. For example, the interior may be chosen as a 5*5 grid; after excluding textures that repeat under rotation, its combinations can encode 1204 ID numbers.
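For illustration, Hamming-distance decoding of such markers can be sketched as comparing the observed bit grid, under all four rotations, against a dictionary of known patterns and accepting the closest entry. The sketch below uses a made-up 3*3 two-marker dictionary in place of the 5*5 interior grids described above; it is not the patent's exact code.

```python
def rot90(grid):
    """Rotate a tuple-of-tuples bit grid 90 degrees clockwise."""
    return tuple(zip(*grid[::-1]))

def hamming(a, b):
    """Count differing cells between two equal-sized bit grids."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def decode_marker(observed, dictionary, max_dist=0):
    """Try all 4 rotations of `observed`; return (marker_id, rotation)
    of the closest dictionary entry, or None if nothing is close enough."""
    best = None
    grid = tuple(tuple(r) for r in observed)
    for rot in range(4):
        for marker_id, pattern in dictionary.items():
            d = hamming(grid, pattern)
            if best is None or d < best[0]:
                best = (d, marker_id, rot)
        grid = rot90(grid)
    if best is not None and best[0] <= max_dist:
        return best[1], best[2]
    return None

# Toy 3x3 dictionary standing in for the 5x5 interior grids.
DICT = {
    7:  ((1, 0, 1), (0, 1, 0), (0, 0, 1)),
    12: ((1, 1, 0), (1, 0, 0), (0, 1, 1)),
}
```

Excluding rotationally repeating textures at dictionary-generation time, as the text describes, guarantees that the recovered rotation is unambiguous.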
Preferably, the markers may also be chosen directly on reference object surfaces around the periphery of the movement space of the object to be positioned.
The markers also need to satisfy the following condition: at every position of the object to be positioned in the space, at least one complete marker can be captured.
One method of arranging the markers is as follows. A spatial reference point, i.e. the origin of the world coordinate system, is selected, and the markers are fixed in the space in a way that makes it convenient to measure the three-dimensional positions of their four rectangular vertices; flat plates or supports may be used to fix the markers. The density of the markers must ensure that at least one marker appears in the shooting field of view of the camera device as the object to be positioned moves.
S2: an image containing at least one marker is captured by the camera device mounted on the object to be positioned. The camera device here refers to a vision sensor; an optical camera can generally be used, acquiring image data in real time at a preset frequency.
It should be noted that the camera device must be calibrated before shooting. For example, the camera device can be calibrated with the checkerboard method of Zhang Zhengyou's calibration to obtain the camera intrinsics, namely the focal lengths fx, fy and the principal-point offsets Cx, Cy, together with the distortion parameters; this determines the imaging mathematical model of the camera device. The intrinsics and distortion parameters are then saved to be called during pose solving.
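For illustration, once fx, fy, Cx, Cy are known, the pinhole imaging model maps a point (X, Y, Z) in the camera frame to pixel coordinates. The sketch below omits distortion and uses made-up intrinsic values (800-pixel focal lengths, principal point at the centre of a 640*480 image):

```python
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point to pixel coordinates
    (lens distortion omitted for clarity)."""
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Made-up intrinsics for illustration.
u, v = project_point(0.1, -0.05, 2.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

In practice the distortion parameters obtained from calibration would be applied before this projection.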
S3: all markers in the image are recognized, processed and screened to obtain a unique target marker, and the position information of the target marker in the image coordinate system is determined.
It should be noted that the captured image may contain multiple markers or other interfering objects, so the image must be processed to select a single qualifying marker. For example, all possible markers can be recognized from the image by an OpenCV recognition algorithm and then measured; the unique target marker can then be selected, for example according to Hamming distance.
S4: the target marker is decoded to obtain the ID number of the target marker.
For example, when the marker is a pattern whose interior consists of a 5*5 black-and-white grid, its ID number can be obtained by Hamming-distance decoding.
S5: the marker coordinates of the target marker in the world coordinate system are determined according to the ID number of the target marker. The coordinates of each marker in the world coordinate system can be set in advance and pre-stored in the object to be positioned; once the object to be positioned has recognized a marker, the coordinates of that marker can be retrieved.
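For example, if the markers are laid out on a planar grid with known spacing, the pre-stored lookup from ID number to world corner coordinates can be generated rather than typed in. The grid geometry below (4 columns, 0.5 m pitch, 0.25 m marker side, markers on the z = 0 plane) is a made-up illustration:

```python
def marker_corners(marker_id, cols, pitch, side):
    """World (x, y, z) coordinates of a marker's 4 corners for markers laid
    out row-major on the z=0 plane: `pitch` between origins, `side` per marker."""
    row, col = divmod(marker_id, cols)
    x0, y0 = col * pitch, row * pitch
    return [(x0, y0, 0.0), (x0 + side, y0, 0.0),
            (x0 + side, y0 + side, 0.0), (x0, y0 + side, 0.0)]

# Pre-stored lookup table for markers 0..11, as S5 describes.
MARKER_TABLE = {i: marker_corners(i, cols=4, pitch=0.5, side=0.25)
                for i in range(12)}
```

After decoding yields an ID number, `MARKER_TABLE[marker_id]` supplies the four world-coordinate vertices needed for pose solving.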
S6: the pose of the object to be positioned is determined according to the position information and the marker coordinates.
For example, one of the pose calculation methods provided by OpenCV may be used: the PnP problem (Perspective-n-Point) is solved with an algorithm that iterates to the solution minimizing the reprojection error as the optimal solution. From the world coordinates of the recognized marker, the pose of the world coordinate system relative to the camera coordinate system is solved; an inversion operation then yields the pose of the camera coordinate system in the world coordinate system; taking the difference between this pose and the pose of the camera device relative to the object to be positioned gives the pose of the object to be positioned.
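The inversion operation mentioned above is plain rigid-transform algebra: if the world frame has rotation R and translation t in the camera frame, the camera pose in the world frame is Rᵀ and -Rᵀt. A minimal sketch (the example R and t are made-up values, a 90-degree rotation about z plus a shift):

```python
def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def mat_vec(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def invert_pose(R, t):
    """Invert a rigid transform (R, t): returns (R^T, -R^T t)."""
    Rt = transpose(R)
    t_inv = [-x for x in mat_vec(Rt, t)]
    return Rt, t_inv

# World frame rotated 90 degrees about z and shifted, as seen by the camera.
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
t = [1.0, 2.0, 3.0]
R_inv, t_inv = invert_pose(R, t)
```

Applying the inversion twice recovers the original transform, which is a convenient sanity check on an implementation.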
This embodiment provides a multi-marker-based object positioning and orientation method: a plurality of mutually distinct recognizable markers are arranged in the movement space of the object to be positioned, and the object recognizes and processes these markers to determine its own pose, achieving wide-range positioning and orientation of the object to be positioned with strong practicality and relatively low cost.
As shown in Fig. 2, which is a flow chart of a multi-marker-based object positioning and orientation method provided by another embodiment of the present invention, the method is described in detail below and comprises the following steps:
Steps S1 and S2, and the overview of step S3, are the same as in the embodiment described above.
Specifically, step S3 can be refined into the following steps.
S31: the image is recognized according to the pre-stored OpenCV recognition algorithm to obtain the binary contour map of the image, and the vertices of each contour in the image are determined. This process is described in detail below.
After the image is acquired, grayscale conversion is first performed on it. It will be understood that if the camera device is a high-speed camera that directly captures grayscale images, no grayscale conversion is needed.
The resulting grayscale image is then converted into a binary image. The binarization can use an adaptive threshold segmentation algorithm, which copes with a certain amount of illumination variation.
A morphological opening operation is then applied to the binary image to denoise it, and the contours of the denoised binary image are extracted according to the preset contour extraction algorithm.
Next, according to the preset contour threshold, contours whose area is smaller than the threshold are removed. For example, if the minimum area of a marker is s, the contour threshold can be set to s. It will be understood that contours can also be judged here by edge length and the like to remove the smaller contours.
Polygonal approximation is performed on the retained contours to obtain polygonal contours, each polygonal contour is checked for convexity, and those that are not convex polygons are removed.
Then, according to a preset edge-length threshold, the edge lengths of the resulting convex polygonal contours are judged, and contours failing the edge-length condition are removed. For example, if the minimum edge length of a marker is b, the edge-length threshold can be set to b; this step further removes contours that do not satisfy the conditions.
Finally, the vertices of the retained convex polygonal contours are extracted.
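The filtering conditions just described (area threshold, convexity, minimum edge length) can be sketched on plain vertex lists; the thresholds and test polygons below are made-up illustrations, not the patent's parameters:

```python
def polygon_area(pts):
    """Shoelace formula; pts is a list of (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def is_convex(pts):
    """Cross products at consecutive vertices must all share one sign."""
    n = len(pts)
    signs = set()
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

def min_edge(pts):
    n = len(pts)
    return min(((pts[(i + 1) % n][0] - pts[i][0]) ** 2 +
                (pts[(i + 1) % n][1] - pts[i][1]) ** 2) ** 0.5
               for i in range(n))

def keep_contour(pts, area_thresh, edge_thresh):
    """Retain a polygonal contour only if it passes all three conditions."""
    return (polygon_area(pts) >= area_thresh and
            is_convex(pts) and
            min_edge(pts) >= edge_thresh)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
arrow = [(0, 0), (10, 0), (5, 3), (5, 10)]  # concave quadrilateral
```

In an OpenCV pipeline the same checks would typically be delegated to `cv2.contourArea`, `cv2.isContourConvex` and the approximated polygon's edge lengths.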
S32: the image is processed according to the transformation matrix obtained from the vertices to obtain the candidate markers. Assuming a square marker whose interior contains a 5*5 black-and-white grid, the process is as follows.
First, the transformation matrix is obtained from the vertices of the convex polygonal contour so as to obtain a front view of the marker: the image is perspective-transformed through the transformation matrix into a 70*70 rectangular marker patch. Otsu threshold segmentation is then applied to the patch, and the segmented binary contour image is traversed with a 10*10 grid over the edge region to judge whether it has a ring of dark border: if so, the contour is retained; if not, it is removed. The black-and-white cells inside the marker are then decoded, and the marker ID information is obtained by Hamming-distance decoding. The candidate markers are thus obtained.
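The border test described above can be sketched as follows: the 70*70 warped patch, split into a 7*7 grid of 10*10 cells, must have every border cell dark for the candidate to survive, and the inner 5*5 cells are passed on to decoding. This is an illustrative sketch in which 1 stands for a dark cell:

```python
def check_border_and_extract(cells):
    """cells: 7x7 grid of bits, 1 = dark cell.
    Returns the inner 5x5 grid if the whole border is dark, else None."""
    n = len(cells)
    for i in range(n):
        if cells[0][i] != 1 or cells[n - 1][i] != 1:
            return None
        if cells[i][0] != 1 or cells[i][n - 1] != 1:
            return None
    return [row[1:n - 1] for row in cells[1:n - 1]]

# A valid candidate: dark border ring around a 5x5 interior pattern.
inner = [[1, 0, 0, 0, 0],
         [0, 1, 0, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 0, 0, 1]]
grid = [[1] * 7] + [[1] + row + [1] for row in inner] + [[1] * 7]
```

In practice each cell's bit would be set by majority vote over its 10*10 pixels of the thresholded patch.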
S33: the number of candidate markers is judged. When the number of candidate markers equals 1, the candidate marker is taken as the target marker; when it is greater than 1, the contour area of each candidate marker is judged.
S34: when the contour areas of the candidate markers differ, the candidate marker with the largest contour area is taken as the target marker: the closer a marker is to the object to be positioned, the larger its area, so this selects the nearest marker as the target. When the contour areas of the candidate markers are identical, the distance between the midpoint of each candidate marker and the principal point of the image is judged, and the candidate marker closest to the principal point is selected as the target marker.
S35: the vertex coordinates of each vertex of the target marker in the image coordinate system are determined, giving the position information of the target marker in the image coordinate system.
S4: the target marker is decoded to obtain the ID number of the target marker.
S5: the marker coordinates of the target marker in the world coordinate system are determined according to the ID number. The coordinates of each marker in the world coordinate system can be set in advance and pre-stored in the object to be positioned; once the object to be positioned has recognized a marker, the coordinates of that marker can be retrieved.
S6: the pose of the object to be positioned is determined according to the position information and the marker coordinates.
Specifically, according to the position information, the marker coordinates and the preset pose calculation method, the pose of the target marker in the world coordinate system relative to the image coordinate system is determined.
The pose of the image coordinate system relative to the world coordinate system is then determined according to the preset inversion operation, giving the pose of the object to be positioned.
For example, one of the pose calculation methods provided by OpenCV, an algorithm that iterates to the solution minimizing the reprojection error as the optimal solution, may be used to solve the PnP problem and determine the pose of the target marker in the world coordinate system relative to the image coordinate system.
When the object to be positioned translates or rotates, steps S2 to S6 are re-executed to determine its pose.
With a single marker alone, the pose of the object to be positioned can only be determined while that marker appears in the camera's field of view. When the object moves or rotates over a large range, so that a single marker leaves the field of view, the pose can still be determined by recognizing another of the multiple markers present in the field of view; the multiple markers thus extend the positioning and orientation range.
As shown in Fig. 3, a multi-marker-based object positioning and orientation system provided by another embodiment of the present invention comprises, in its spatial structure, a plurality of mutually distinct recognizable markers 10 and an object to be positioned 20, wherein:
the markers 10 are arranged in the movement space of the object to be positioned 20.
The object to be positioned 20 may be a movable robot vehicle, a robot, an aircraft or the like, and may specifically comprise:
a camera device 21, for capturing an image containing at least one marker 10; an optical camera can be used, acquiring image data in real time at a certain frequency.
The camera device 21 may be connected to the object to be positioned 20 as follows: the bottom of a telescopic rod is fixed to the object to be positioned 20, its top is connected to a ball-type gimbal head, and the head is connected to the camera device 21. The height of the camera device 21 can be adjusted through the telescopic rod, and its rotation angle through the ball-type head; by adjusting the height and viewing angle of the camera device 21, it can be ensured that at least one marker appears in its field of view.
A processor 22, for recognizing, processing and screening all markers 10 in the image to obtain a unique target marker, determining the position information of the target marker in the image coordinate system, decoding the target marker to obtain its ID number, determining the marker coordinates of the target marker in the world coordinate system according to the ID number, and determining the pose of the object to be positioned 20 according to the position information and the marker coordinates.
As can be seen from this embodiment, the processor 22 in the object to be positioned 20 performs complex processing on the image; the object to be positioned 20 is described in detail below through another embodiment.
As shown in figure 4, another embodiment of the present invention provides a schematic structural diagram of the object to be positioned 20 in an object positioning and orientation system based on multiple marks. The object to be positioned 20 comprises a camera device 21 and a processor 22, and the processor 22 specifically comprises:
an image identification unit 221, configured to identify the image according to a prestored OpenCV recognition algorithm, obtain a binary contour map of the image, and determine the vertices of each contour in the image.
Specifically, the image identification unit 221 performs grayscale conversion and binarization on the image in turn to obtain a binary image, denoises the binary image, extracts the contours of the binary image according to a preset contour extraction algorithm, removes contours whose area is less than a preset contour threshold, performs polygonal approximation on the remaining contours to obtain polygonal contours, discards polygonal contours that are not convex polygons, and extracts the vertices of the remaining polygonal contours.
A graphics processing unit 222, configured to process the image according to a transformation matrix obtained from the vertices, obtaining candidate marks.
A judging unit 223, configured to judge the quantity of the candidate marks: when the quantity is equal to 1, the candidate mark is taken as the target identification; when the quantity is greater than 1, the contour area of each candidate mark is judged. When the contour areas of the candidate marks differ, the candidate mark with the largest contour area is taken as the target identification; when the contour areas are identical, the distance between the midpoint of each candidate mark and the principal point of the image is judged, and the candidate mark closest to the principal point is selected as the target identification.
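The screening rule above can be expressed compactly. The sketch below assumes each candidate mark is given as a (contour_area, midpoint) pair and that the principal point of the image is known; these names and the data layout are illustrative, not from the patent:

```python
import math

def select_target(candidates, principal_point):
    """Pick the unique target identification from a list of candidate marks.

    candidates: list of (contour_area, (mx, my)) pairs.
    principal_point: (px, py), the principal point of the image.
    """
    if len(candidates) == 1:
        return candidates[0]                # exactly one candidate: take it
    areas = [a for a, _ in candidates]
    if len(set(areas)) > 1:
        # areas differ: the candidate with the largest contour area wins
        return max(candidates, key=lambda c: c[0])
    # areas identical: the candidate whose midpoint is closest to the principal point wins
    return min(candidates, key=lambda c: math.dist(c[1], principal_point))
```

The patent does not say how mixed cases (some areas equal, some not) are resolved; this sketch treats any difference in area as the "areas differ" branch.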
A coordinate determining unit 224, configured to determine the vertex coordinates of each vertex of the target identification in the image coordinate system respectively, obtaining the position information of the target identification in the image coordinate system.
A decoding unit 225, configured to decode the target identification to obtain its numbering, and to determine the mark coordinates of the target identification in the world coordinate system according to that numbering.
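How a numbering maps to world coordinates is left open by the patent; one common convention is a planar grid of marks with fixed spacing. The sketch below assumes that convention — the row-major numbering, GRID_COLS, and SPACING_M are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical layout: marks numbered row-major on a planar grid with known spacing.
GRID_COLS = 10        # marks per row (assumed)
SPACING_M = 0.5       # distance between adjacent marks, in metres (assumed)

def mark_world_coord(numbering):
    """Map a decoded mark numbering to its mark coordinates in the world frame."""
    row, col = divmod(numbering, GRID_COLS)
    return (col * SPACING_M, row * SPACING_M, 0.0)  # z = 0: all marks on one plane
```

In a deployed system this lookup would more likely be a prestored table built during mark installation, so that the marks need not lie on a regular grid.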
A computing unit 226, configured to determine, according to the position information, the mark coordinates, and a preset pose computation method, the pose of the target identification in the world coordinate system relative to the image coordinate system, and to determine the pose of the image coordinate system relative to the world coordinate system according to a preset inversion operation, obtaining the pose of the object to be positioned 20.
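The "preset inversion operation" is the standard inverse of a rigid transform: if the pose relative to the camera is (R, t), the camera's pose in the world frame follows by transposing R and reversing t. A numpy sketch (the pose itself would in practice come from a 2D–3D pose solver such as OpenCV's solvePnP, which the patent does not name explicitly):

```python
import numpy as np

def invert_pose(R, t):
    """Invert a rigid transform: (R, t) mapping world->camera becomes camera->world."""
    R_inv = R.T            # rotation matrices are orthogonal, so inverse = transpose
    t_inv = -R.T @ t
    return R_inv, t_inv

# Example: a 90-degree rotation about z plus a translation, and its inverse
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])
R_inv, t_inv = invert_pose(R, t)
```

Composing the transform with its inverse must return any point to itself, which is a cheap sanity check on an implementation of this step.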
Preferably, the processor 22 is further configured to redetermine the pose of the object to be positioned 20 when the object to be positioned 20 translates or rotates.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not conflict, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of different embodiments or examples.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the apparatus and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic; the division into units is only a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
- 1. An object positioning and orientation method based on multiple marks, characterized by comprising the following steps:
Step 1: arranging multiple mutually distinct recognizable marks in the space of an object to be positioned;
Step 2: capturing, by a camera device arranged on the object to be positioned, an image containing at least one mark;
Step 3: identifying, processing, and screening all marks in the image to obtain a unique target identification, and determining position information of the target identification in an image coordinate system;
Step 4: decoding the target identification to obtain a numbering of the target identification;
Step 5: determining mark coordinates of the target identification in a world coordinate system according to the numbering of the target identification;
Step 6: determining a pose of the object to be positioned according to the position information and the mark coordinates.
- 2. The object positioning and orientation method according to claim 1, characterized in that step 3 specifically comprises:
Step 3.1: identifying the image according to a prestored OpenCV recognition algorithm, obtaining a binary contour map of the image, and determining vertices of each contour in the image;
Step 3.2: processing the image according to a transformation matrix obtained from the vertices, obtaining candidate marks;
Step 3.3: judging the quantity of the candidate marks; when the quantity is equal to 1, taking the candidate mark as the target identification; when the quantity is greater than 1, judging the contour area of each candidate mark;
Step 3.4: when the contour areas of the candidate marks differ, taking the candidate mark with the largest contour area as the target identification; when the contour areas are identical, judging the distance between the midpoint of each candidate mark and the principal point of the image, and selecting the candidate mark closest to the principal point as the target identification;
Step 3.5: determining the vertex coordinates of each vertex of the target identification in the image coordinate system respectively, obtaining the position information of the target identification in the image coordinate system.
- 3. The object positioning and orientation method according to claim 2, characterized in that step 3.1 specifically comprises:
Step 3.1.1: performing grayscale conversion and binarization on the image in turn to obtain a binary image, and denoising the binary image;
Step 3.1.2: extracting contours of the binary image according to a preset contour extraction algorithm;
Step 3.1.3: removing, according to a preset contour threshold, contours whose area is less than the contour threshold;
Step 3.1.4: performing polygonal approximation on the remaining contours to obtain polygonal contours;
Step 3.1.5: judging whether each polygonal contour is a convex polygon, and removing polygonal contours that are not convex polygons;
Step 3.1.6: extracting the vertices of the remaining polygonal contours.
- 4. The object positioning and orientation method according to any one of claims 1 to 3, characterized in that step 6 specifically comprises:
Step 6.1: determining, according to the position information, the mark coordinates, and a preset pose computation method, the pose of the target identification in the world coordinate system relative to the image coordinate system;
Step 6.2: determining the pose of the image coordinate system relative to the world coordinate system according to a preset inversion operation, obtaining the pose of the object to be positioned.
- 5. The object positioning and orientation method according to claim 4, characterized by further comprising:
Step 7: when the object to be positioned translates or rotates, re-executing steps 2 to 6 to determine the pose of the object to be positioned.
- 6. An object positioning and orientation system based on multiple marks, characterized by comprising multiple mutually distinct recognizable marks and an object to be positioned, wherein:
the marks are arranged in the space of the object to be positioned;
the object to be positioned comprises:
a camera device, configured to capture an image containing at least one mark;
a processor, configured to identify, process, and screen all marks in the image to obtain a unique target identification, determine position information of the target identification in an image coordinate system, decode the target identification to obtain a numbering of the target identification, determine mark coordinates of the target identification in a world coordinate system according to the numbering, and determine a pose of the object to be positioned according to the position information and the mark coordinates.
- 7. The object positioning and orientation system according to claim 6, characterized in that the processor specifically comprises:
an image identification unit, configured to identify the image according to a prestored OpenCV recognition algorithm, obtain a binary contour map of the image, and determine vertices of each contour in the image;
a graphics processing unit, configured to process the image according to a transformation matrix obtained from the vertices, obtaining candidate marks;
a judging unit, configured to judge the quantity of the candidate marks; when the quantity is equal to 1, take the candidate mark as the target identification; when the quantity is greater than 1, judge the contour area of each candidate mark; when the contour areas differ, take the candidate mark with the largest contour area as the target identification; when the contour areas are identical, judge the distance between the midpoint of each candidate mark and the principal point of the image, and select the candidate mark closest to the principal point as the target identification;
a coordinate determining unit, configured to determine the vertex coordinates of each vertex of the target identification in the image coordinate system respectively, obtaining the position information of the target identification in the image coordinate system.
- 8. The object positioning and orientation system according to claim 7, characterized in that the image identification unit is specifically configured to: perform grayscale conversion and binarization on the image in turn to obtain a binary image; denoise the binary image; extract contours of the binary image according to a preset contour extraction algorithm; remove, according to a preset contour threshold, contours whose area is less than the contour threshold; perform polygonal approximation on the remaining contours to obtain polygonal contours; judge whether each polygonal contour is a convex polygon and remove polygonal contours that are not convex polygons; and extract the vertices of the remaining polygonal contours.
- 9. The object positioning and orientation system according to any one of claims 6 to 8, characterized in that the processor further comprises:
a computing unit, configured to determine, according to the position information, the mark coordinates, and a preset pose computation method, the pose of the target identification in the world coordinate system relative to the image coordinate system, and to determine the pose of the image coordinate system relative to the world coordinate system according to a preset inversion operation, obtaining the pose of the object to be positioned.
- 10. The object positioning and orientation system according to claim 9, characterized in that the processor is further configured to redetermine the pose of the object to be positioned when the object to be positioned translates or rotates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710571344.1A CN107481287A (en) | 2017-07-13 | 2017-07-13 | It is a kind of based on the object positioning and orientation method and system identified more |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107481287A true CN107481287A (en) | 2017-12-15 |
Family
ID=60596668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710571344.1A Pending CN107481287A (en) | 2017-07-13 | 2017-07-13 | It is a kind of based on the object positioning and orientation method and system identified more |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107481287A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109101916A (en) * | 2018-08-01 | 2018-12-28 | 甘肃未来云数据科技有限公司 | The acquisition methods and device of video actions based on mark band |
CN109903337A (en) * | 2019-02-28 | 2019-06-18 | 北京百度网讯科技有限公司 | Method and apparatus for determining the pose of the scraper bowl of excavator |
CN110081862A (en) * | 2019-05-07 | 2019-08-02 | 达闼科技(北京)有限公司 | A kind of localization method of object, positioning device, electronic equipment and can storage medium |
WO2019154444A3 (en) * | 2018-05-31 | 2019-10-03 | 上海快仓智能科技有限公司 | Mapping method, image acquisition and processing system, and positioning method |
CN111709999A (en) * | 2020-05-13 | 2020-09-25 | 深圳奥比中光科技有限公司 | Calibration plate, camera calibration method and device, electronic equipment and camera system |
CN111754576A (en) * | 2020-06-30 | 2020-10-09 | 广东博智林机器人有限公司 | Rack measuring system, image positioning method, electronic device and storage medium |
CN112033408A (en) * | 2020-08-27 | 2020-12-04 | 河海大学 | Paper-pasted object space positioning system and positioning method |
CN112767487A (en) * | 2021-01-27 | 2021-05-07 | 京东数科海益信息科技有限公司 | Robot positioning method, device and system |
CN113031582A (en) * | 2019-12-25 | 2021-06-25 | 北京极智嘉科技股份有限公司 | Robot, positioning method, and computer-readable storage medium |
CN113095103A (en) * | 2021-04-15 | 2021-07-09 | 京东数科海益信息科技有限公司 | Intelligent equipment positioning method, device, equipment and storage medium |
CN114332234A (en) * | 2021-10-26 | 2022-04-12 | 鹰驾科技(深圳)有限公司 | Automatic calibration method and system based on checkerboard |
WO2022078513A1 (en) * | 2020-10-16 | 2022-04-21 | 北京猎户星空科技有限公司 | Positioning method and apparatus, self-moving device, and storage medium |
CN114523471A (en) * | 2022-01-07 | 2022-05-24 | 中国人民解放军海军军医大学第一附属医院 | Error detection method based on associated identification and robot system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104331689A (en) * | 2014-11-13 | 2015-02-04 | 清华大学 | Cooperation logo and recognition method of identities and poses of a plurality of intelligent individuals |
CN104463108A (en) * | 2014-11-21 | 2015-03-25 | 山东大学 | Monocular real-time target recognition and pose measurement method |
CN106197417A (en) * | 2016-06-22 | 2016-12-07 | 平安科技(深圳)有限公司 | The indoor navigation method of handheld terminal and handheld terminal |
CN106295512A (en) * | 2016-07-27 | 2017-01-04 | 哈尔滨工业大学 | Many correction line indoor vision data base construction method based on mark and indoor orientation method |
Non-Patent Citations (3)
Title |
---|
XU ZHONG et al.: "Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots", International Journal of Advanced Robotic Systems |
GUAN KAI: "Research on indoor visual positioning algorithms based on markers", China Masters' Theses Full-text Database, Information Science and Technology |
LI HONGBO et al.: "Layout method of visual positioning markers in an immersive virtual reality roaming system", Sciencepaper Online |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171215 |