CN100410622C - Information processing method and apparatus for finding position and orientation of targeted object - Google Patents


Info

Publication number
CN100410622C
Authority
CN
China
Prior art keywords
orientation
image
imaging device
index
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB200510069367XA
Other languages
Chinese (zh)
Other versions
CN1696606A (zh)
Inventor
佐藤清秀 (Kiyohide Satoh)
内山晋二 (Shinji Uchiyama)
远藤隆明 (Takaaki Endo)
铃木雅博 (Masahiro Suzuki)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Publication of CN1696606A
Application granted
Publication of CN100410622C


Landscapes

  • Image Processing (AREA)

Abstract

In an information processing method, an orientation sensor is mounted on a targeted object to be measured, and bird's-eye view cameras for capturing images of the targeted object are fixedly installed. From the images captured by the bird's-eye view cameras, an index detecting unit detects indices mounted on the orientation sensor. A measured orientation value from the orientation sensor is input to an orientation predicting unit, and the orientation predicting unit predicts the present orientation of the targeted object based on an azimuth-drift-error correction value. A position-orientation calculating unit uses the image coordinates of the detected indices to calculate the position of the imaging device and an update value of the azimuth-drift-error correction value, which are unknown parameters. From the obtained parameters, the position-orientation calculating unit finds and outputs the position and orientation of the targeted object.

Description

Information processing method and apparatus for obtaining the position and orientation of a target object
Technical field
The present invention relates to an apparatus and a method for measuring the position and orientation of an object.
Background art
In recent years, a large amount of research has been carried out on mixed reality, the purpose of which is to join a real space and a virtual space seamlessly. An image display apparatus for presenting mixed reality can be realized by the so-called "video see-through method", in which an image of a virtual space (for example, a virtual object or text information drawn by computer graphics) generated in accordance with the position and orientation of an imaging device such as a video camera is superimposed on an image of the real space captured by the imaging device, and the superimposed image is displayed.
Alternatively, the image display apparatus can be realized by the so-called "optical see-through method", in which an image of a virtual space generated in accordance with the position and orientation of the observer's viewpoint is displayed on an optical see-through display mounted on the observer's head.
Applications of such image display apparatuses are expected in new fields different from the virtual reality of the related art, for example, surgical aids that display the state of the interior of a patient's body superimposed on the body surface, and mixed-reality games in which a player moving in the real space fights virtual enemies in the virtual space.
What these applications commonly require is accuracy of the registration performed between the real space and the virtual space, and many attempts in this respect have been made. In the case of using the video see-through method, the registration problem in mixed reality reduces to the problem of obtaining the position and orientation of the imaging device in the scene (that is, in the world coordinate system). Similarly, in the case of using the optical see-through method, the registration problem reduces to the problem of obtaining the position and orientation of the display or of the observer's viewpoint in the scene.
As a method for solving the former problem, it is common practice to place or set a plurality of indices in the scene and to detect the coordinates of the projected images of the indices in an image captured by the imaging device, thereby obtaining the position and orientation of the imaging device in the scene. In addition, many attempts have been made to obtain registration more stable than in the case of using only image information, by using an inertial sensor mounted on the imaging device. More specifically, the position and orientation of the imaging device estimated from values measured by the inertial sensor are used for index detection; they are also used as initial values for the position and orientation calculated from the image, or as a rough position and orientation when no index is found (for example, Hirofumi FUJII, Masayuki KANBARA, Hidehiko IWASA, Haruo TAKEMURA, and Naokazu YOKOYA, "Kakuchogenjitsu-no tameno Jairosensa-wo Heiyoshita Sutereokamera-niyoru Ichiawase (Registration with a stereo camera by jointly using a gyro sensor for augmented reality)", Denshi Joho Tsushin Gakkai (Institute of Electronics, Information and Communication Engineers) Gijutsu Hokoku (Technical Report of IEICE), PRMU99-192, vol. 99, no. 574, pp. 1-8).
As a method for solving the latter problem, it is common practice to mount an imaging device (and an inertial sensor) on the target object to be measured (that is, the observer's head or the display), to obtain the position and orientation of the imaging device in a manner similar to the former case, and then to obtain the position and orientation of the target object from the known relative position and orientation between the imaging device and the target object.
However, in the above methods of the related art, when the subjective-viewpoint image does not contain image information sufficient to achieve stable registration, for example, when the indices are observed only in a small part of the image, or when only three indices are observed and the index detection includes error, the accuracy and stability of the obtained solution may be insufficient. Moreover, when the number of observed indices is not greater than two, no solution can be obtained. To avoid these problems, a large number of indices must be placed uniformly throughout the scene. This causes the problems that distinguishing the indices becomes relatively difficult and that the image of the real space becomes cluttered. There is also the problem that, when the observer's hand covers the index images in the subjective-viewpoint image, registration becomes completely impossible.
Summary of the invention
According to an aspect of the present invention, an information processing method for calculating the position and orientation of an object includes the following steps: inputting a captured image from an imaging device that captures an image of the object from a bird's-eye viewpoint position with respect to the object; inputting a measured orientation value from an orientation sensor that measures information on the orientation of the object; detecting, from the captured image, feature values concerning the image coordinates of indices placed on the object; obtaining a parameter concerning the azimuth and parameters concerning the position of the object by using the detected feature values concerning the index image coordinates, these parameters being treated as at least the unknown parameters; and calculating the position and orientation of the object by using the obtained parameters.
According to another aspect of the present invention, an information processing method is provided for calculating the position and orientation of an imaging device that captures images of a scene. The information processing method includes: a first image input step of inputting a first image captured by the imaging device; a second image input step of inputting a second image captured by a bird's-eye-view imaging unit that captures images from a viewpoint position overlooking the imaging device; an orientation input step of inputting a measured orientation value from an orientation sensor that measures information on the orientation of the imaging device; a first detecting step of detecting, from the first image input in the first image input step, first-index image-coordinate feature values concerning the image coordinates of first indices placed in the scene; a second detecting step of detecting, from the second image input in the second image input step, second-index image-coordinate feature values concerning the image coordinates of second indices placed in the scene; and a position-orientation calculating step of calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected in the first detecting step, the second-index image-coordinate feature values detected in the second detecting step, and the measured orientation value input in the orientation input step.
According to another aspect of the present invention, an information processing method is provided for calculating the position and orientation of an imaging device that captures images of a scene. The information processing method includes: a first image input step of inputting a first image captured by the imaging device; a second image input step of inputting a second image captured by a bird's-eye-view imaging unit that captures images from a viewpoint position overlooking the imaging device; an orientation input step of inputting a measured orientation value from an orientation sensor that measures information on the orientation of the imaging device; a first detecting step of detecting, from the first image input in the first image input step, first-index image-coordinate feature values concerning the image coordinates of first indices placed in the scene; a second detecting step of detecting second-index image-coordinate feature values concerning the image coordinates of second indices placed on the imaging device; and a position-orientation calculating step of calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected in the first detecting step, the second-index image-coordinate feature values detected in the second detecting step, and the measured orientation value input in the orientation input step.
According to another aspect of the present invention, an information processing apparatus is provided for calculating the position and orientation of an object. The information processing apparatus includes: a captured-image input unit for inputting a captured image from an imaging device that captures an image of the object from a bird's-eye viewpoint position with respect to the object; a measured-orientation-value input unit for inputting a measured orientation value from an orientation sensor that measures information on the orientation of the object; a detecting unit for detecting, from the captured image, feature values concerning the image coordinates of indices placed on the object; and a position-orientation calculating unit for obtaining a parameter concerning the azimuth and parameters concerning the position of the object by using the detected feature values concerning the index image coordinates, these parameters being treated as at least the unknown parameters, and for calculating the position and orientation of the object by using the obtained parameters.
According to another aspect of the present invention, an information processing apparatus is provided for calculating the position and orientation of an imaging device that captures images of a scene. The information processing apparatus includes: a first image input unit for inputting a first image captured by the imaging device; a second image input unit for inputting a second image captured by a bird's-eye-view imaging unit that captures images from a viewpoint position overlooking the imaging device; an orientation input unit for inputting a measured orientation value from an orientation sensor that measures information on the orientation of the imaging device; a first detecting unit for detecting, from the first image input by the first image input unit, first-index image-coordinate feature values concerning the image coordinates of first indices placed in the scene; a second detecting unit for detecting, from the second image input by the second image input unit, second-index image-coordinate feature values concerning the image coordinates of second indices placed on the imaging device; and a position-orientation calculating unit for calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected by the first detecting unit, the second-index image-coordinate feature values detected by the second detecting unit, and the measured orientation value input by the orientation input unit.
According to another aspect of the present invention, an information processing apparatus is provided for calculating the position and orientation of an imaging device that captures images of a scene. The information processing apparatus includes: a first image input unit for inputting a first image captured by the imaging device; a second image input unit for inputting a second image captured by a bird's-eye-view imaging unit that captures, from a viewpoint position overlooking the imaging device, images of the scene; an orientation input unit for inputting a measured orientation value from an orientation sensor that measures information on the orientation of the imaging device; a first detecting unit for detecting, from the first image input by the first image input unit, first-index image-coordinate feature values concerning the image coordinates of first indices placed in the scene; a second detecting unit for detecting, from the second image input by the second image input unit, second-index image-coordinate feature values concerning the image coordinates of second indices placed on the imaging device; and a position-orientation calculating unit for calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected by the first detecting unit, the second-index image-coordinate feature values detected by the second detecting unit, and the measured orientation value input by the orientation input unit.
Other features and advantages of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures.
Description of drawings
Fig. 1 is a block diagram of a position-orientation measuring apparatus according to a first embodiment of the present invention.
Fig. 2 is a block diagram of the basic structure of a computer.
Fig. 3 is a flowchart of the process of the orientation predicting unit shown in Fig. 1, performed when the CPU shown in Fig. 2 executes the software program of the orientation predicting unit.
Fig. 4 is a flowchart of the process of calculating parameters representing the position and orientation of the target object shown in Fig. 1, performed when the CPU shown in Fig. 2 executes the software program of the position-orientation calculating unit shown in Fig. 1.
Fig. 5 is a flowchart of the process of calculating parameters representing the position and orientation of the target object shown in Fig. 1, performed when the CPU shown in Fig. 2 executes the software program of the position-orientation calculating unit according to the third modification of the first embodiment.
Fig. 6 is a block diagram of a position-orientation measuring apparatus according to a second embodiment of the present invention.
Fig. 7 is a flowchart of the process of predicting the orientation of the imaging device shown in Fig. 6, performed when the CPU shown in Fig. 2 executes the software program of the orientation predicting unit shown in Fig. 6.
Fig. 8 is a flowchart of the process of calculating parameters representing the position and orientation of the imaging device shown in Fig. 6, performed when the CPU shown in Fig. 2 executes the software program of the position-orientation calculating unit shown in Fig. 6.
Fig. 9 is a block diagram of a position-orientation measuring apparatus according to a first modification of the second embodiment of the present invention.
Fig. 10 is a flowchart of the process of calculating parameters representing the position and orientation of the imaging device, performed when the CPU executes the software program of the position-orientation calculating unit.
Fig. 11 is a block diagram of a position-orientation measuring apparatus according to a fourth modification of the second embodiment of the present invention.
Embodiment
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
First embodiment
A position-orientation measuring apparatus according to the first embodiment of the present invention measures the position and orientation of an arbitrary target object to be measured. The position-orientation measuring apparatus and its position-orientation measuring method according to this embodiment are described below.
Fig. 1 shows the structure of a position-orientation measuring apparatus 10 according to the first embodiment. As shown in Fig. 1, the position-orientation measuring apparatus 10 includes bird's-eye view cameras 18a, 18b, 18c, and 18d, an image input unit 16, a data storage unit 17, an index detecting unit 11, an orientation sensor 14, an orientation predicting unit 15, and a position-orientation calculating unit 12. The position-orientation measuring apparatus 10 is connected to a target object 13 to be measured.
At a plurality of positions on the orientation sensor 14 and/or the target object 13, indices P_k (k = 1, ..., K) whose positions x_C^{Pk} in the object coordinate system are known (hereinafter referred to as "bird's-eye view indices" or simply "indices") are placed so that they can be observed by the bird's-eye view cameras 18a, 18b, 18c, and 18d. Here, the object coordinate system is defined by one point on the target object 13 and three mutually orthogonal axes.
Preferably, these indices are placed so that, when the target object 13 is located at any point in the measurement range in which the position and orientation are measured, the total number of (physical) indices observed in the bird's-eye view images obtained by the bird's-eye view cameras 18a, 18b, 18c, and 18d is at least two. The example shown in Fig. 1 represents the following situation: two indices P_1 and P_2 are placed, the index P_1 is included in the field of view of the bird's-eye view camera 18c, and the index P_2 is included in the fields of view of the bird's-eye view cameras 18c and 18d.
The indices P_k may be formed of, for example, spherical or circular markers having different colors, or of feature points, such as natural features, having different texture features. The indices P_k may have any form, provided that the image coordinates of their projected images in a captured image can be detected and that each index can be identified in some manner. In addition, the indices P_k may be placed intentionally, or natural shapes not placed intentionally may be used.
The bird's-eye view cameras 18a, 18b, 18c, and 18d are fixedly placed at positions such that, when the target object 13 is located in the measurement range, at least one of the bird's-eye view cameras 18a, 18b, 18c, and 18d can capture an image of the target object 13. In the following, the term "bird's-eye view camera" denotes a camera that observes the target object 13 from a third-person viewpoint; the position of the camera is not limited to a "looking-down" position. The positions and orientations of the bird's-eye view cameras 18a, 18b, 18c, and 18d in the world coordinate system should be stored in advance in the data storage unit 17 as known values. The images output by the bird's-eye view cameras 18a, 18b, 18c, and 18d (hereinafter referred to as "bird's-eye view images") are input to the image input unit 16.
The input images are converted into digital data by the image input unit 16 and stored in the data storage unit 17.
The orientation sensor 14 is mounted on the target object 13. The orientation sensor 14 measures its current orientation and outputs the measured value to the orientation predicting unit 15. The orientation sensor 14 is a sensor unit based on angular rate sensors such as gyroscopes, and is formed by, for example, the TISS-5-40 made by Tokimec, Inc. of Japan, or the InertiaCube2 made by InterSense, Inc. of the United States. The measured orientation value obtained by each of these sensors contains error and differs from the actual orientation. However, the above orientation sensors have, as components, acceleration sensors for observing the direction of the earth's gravity, and have a function of canceling the accumulation of drift error in the inclination angles. Thus, the above orientation sensors have the characteristic of producing no drift error in the inclination angles (the pitch and roll angles). In other words, the above sensors have drift error that accumulates with time in the azimuth around the gravity axis, that is, in the yaw angle.
The orientation predicting unit 15 receives the azimuth-drift-error correction value φ* from the data storage unit 17, predicts the orientation of the target object 13 by correcting the measured orientation value output from the orientation sensor 14, and outputs the predicted orientation to the data storage unit 17.
The bird's-eye view images are input from the data storage unit 17 to the index detecting unit 11, which detects the image coordinates of the indices in the input images. For example, when the indices are formed of markers having different colors, regions corresponding to the marker colors are detected in each bird's-eye view image, and their centroid positions are used as the detected coordinates of the indices. When the indices are formed of feature points having different texture features, the positions of the indices are detected by template matching on the bird's-eye view images, using template images of the indices stored in advance as known information. From the calculated position value of the target object 13 output by the position-orientation calculating unit 12 and stored in the data storage unit 17, and the predicted orientation value of the target object 13 output by the orientation predicting unit 15 and stored in the data storage unit 17, the position of each index in each image can be predicted, so that the search range can be narrowed. With this option, the amount of calculation required for index detection can be reduced, and false detection and misidentification of indices can also be reduced. A sketch of the color-region case follows.
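The following is a minimal sketch, not from the patent, of the color-region index detection described above, assuming the indices are detected by thresholding an RGB image; all function and parameter names are illustrative.

```python
import numpy as np

def detect_color_marker(image, lower, upper):
    """Minimal sketch of the color-region detector described above.

    image        : H x W x 3 array.
    lower, upper : per-channel bounds of the marker color.
    Returns the centroid of the thresholded region as the detected
    index coordinates, or None if the color is not visible.
    A real detector would label connected components, reject small
    regions, and narrow the search window using the predicted pose.
    """
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (xs.mean(), ys.mean())  # centroid = detected image coordinates
```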
The index detecting unit 11 outputs the image coordinates of the detected indices and their identifiers to the data storage unit 17. In the following, using the camera identifier x (x = a, b, c, d) and the identifier m (m = 1, ..., M_x), where M_x denotes the number of indices detected in each bird's-eye view image, the indices detected by the index detecting unit 11 in the images captured by the bird's-eye view cameras 18a, 18b, 18c, and 18d are denoted by P_{kxm}. Depending on the identifier of the bird's-eye view camera 18a, 18b, 18c, or 18d, the coordinates of the indices P_{kxm} detected on the images are denoted by u_a^{Pkam}, u_b^{Pkbm}, u_c^{Pkcm}, and u_d^{Pkdm}, respectively. M denotes the total number of indices detected over all the images. For example, in the situation of Fig. 1, M_a = 0, M_b = 0, M_c = 2, M_d = 1, and M = 3. Accordingly, the index identifiers k_{c1} = 1, k_{c2} = 2, and k_{d1} = 2, the identifiers of the bird's-eye view cameras that captured these indices, and the corresponding image coordinates u_c^{Pkc1}, u_c^{Pkc2}, and u_d^{Pkd1} are output.
The predicted orientation value of the target object 13, and data sets each composed of the image coordinates u_a^{Pkam}, u_b^{Pkbm}, u_c^{Pkcm}, or u_d^{Pkdm} of an index detected by the index detecting unit 11 and the corresponding object coordinates (coordinate values in the object coordinate system) x_C^{Pkam}, x_C^{Pkbm}, x_C^{Pkcm}, or x_C^{Pkdm}, are input from the data storage unit 17 to the position-orientation calculating unit 12. The position-orientation calculating unit 12 calculates the position and orientation of the target object 13 from the above information, and outputs the calculated position and orientation to the outside through an interface (not shown). In addition, the position-orientation calculating unit 12 outputs the calculated position of the target object 13 to the data storage unit 17, and updates the azimuth-drift-error correction value stored in the data storage unit 17 by using the update value of the azimuth-drift-error correction value of the orientation sensor 14 produced in the course of calculating the position and orientation of the target object 13.
The data storage unit 17 stores data such as the azimuth-drift-error correction value, the images input from the image input unit 16, the predicted orientation values input from the orientation predicting unit 15, the calculated position values input from the position-orientation calculating unit 12, the image coordinates and identifiers of the indices input from the index detecting unit 11, and, as known values, the object coordinates (coordinate values in the object coordinate system) of the indices and the camera parameters of the bird's-eye view cameras 18a, 18b, 18c, and 18d. The data storage unit 17 performs input and output of the stored data as required.
The image input unit 16, the data storage unit 17, the index detecting unit 11, the orientation predicting unit 15, and the position-orientation calculating unit 12 shown in Fig. 1 may be treated as separate devices. Alternatively, their functions may be realized by installing software on one or more computers and having the central processing unit (CPU) of each computer execute the installed software. In the first embodiment, each of the image input unit 16, the data storage unit 17, the index detecting unit 11, the orientation predicting unit 15, and the position-orientation calculating unit 12 shown in Fig. 1 is treated as software executed by a single computer.
Fig. 2 is a block diagram of the basic structure of a computer that executes, as software, the image input unit 16, the data storage unit 17, the index detecting unit 11, the orientation predicting unit 15, and the position-orientation calculating unit 12 shown in Fig. 1.
A CPU 1001 controls the entire computer by using programs and data stored in a random access memory (RAM) 1002 and a read-only memory (ROM) 1003. The CPU 1001 realizes the functions of the units by controlling the execution of the software corresponding to the image input unit 16, the index detecting unit 11, the orientation predicting unit 15, and the position-orientation calculating unit 12.
The RAM 1002 includes an area for temporarily storing programs and data loaded from an external storage device 1007 or a storage medium drive 1008, and a work area required for the CPU 1001 to perform various types of processing. The function of the data storage unit 17 is realized by the RAM 1002.
The ROM 1003 generally stores programs and setting data for the computer. A keyboard 1004 and a mouse 1005 are used by the operator to input various instructions to the CPU 1001.
A display unit 1006 is formed of a cathode-ray tube (CRT), a liquid crystal display, or a similar device, and can display, for example, messages to be displayed for the measurement of the position and orientation of the target object 13.
The external storage device 1007 functions as a large-capacity information storage device, and stores the operating system, software programs, and the like. In the description of the first embodiment, information described as known is stored in the external storage device 1007 and is loaded into the RAM 1002 as required.
The storage medium drive 1008 reads programs and data stored in a recording medium such as a CD-ROM or a DVD-ROM in accordance with instructions from the CPU 1001, and outputs the read programs and data to the RAM 1002 or the external storage device 1007.
An interface 1009 includes, for example, an analog video port or a digital video port such as an IEEE 1394 port for connecting the bird's-eye view cameras 18, an RS-232C or USB serial port for connecting the orientation sensor 14, and an Ethernet port for outputting the position and orientation of the target object 13 to the outside. The input data are loaded into the RAM 1002 through the interface 1009. Part of the function of the image input unit 16 is realized by the interface 1009.
A bus 1010 connects the CPU 1001, the RAM 1002, the ROM 1003, the keyboard 1004, the mouse 1005, the display unit 1006, the external storage device 1007, the storage medium drive 1008, and the interface 1009.
Fig. 3 is a flowchart of the process of the orientation predicting unit 15, performed when the CPU 1001 executes the software program of the orientation predicting unit 15. Before the following processing is performed, the program code corresponding to this flowchart should be loaded into the RAM 1002 in advance.
Although there are various methods for representing an orientation, in this embodiment an orientation is represented by a 3x3 rotation matrix R.
In step S300, the measured orientation value R# (where # denotes a sensor-measured value) is input from the orientation sensor 14 to the orientation predicting unit 15.
In step S301, the azimuth-drift-error correction value φ* is input from the data storage unit 17 to the orientation predicting unit 15.
In step S302, the orientation predicting unit 15 calculates the predicted orientation value R* of the target object 13 by applying to the measured orientation value R# (which represents the orientation of the orientation sensor 14) the transformation from the orientation of the orientation sensor 14 to the orientation of the target object 13, and by correcting the drift error in accordance with the azimuth-drift-error correction value φ*:
R* = ΔR(φ*) · R# · R_SC    (1)
where ΔR(φ) denotes the rotation matrix that adds a rotation by φ in the azimuth direction, defined as a function of φ by the following expression:
$$\Delta R(\phi)=\begin{bmatrix}l_1^2(1-\cos\phi)+\cos\phi & l_1 l_2(1-\cos\phi)-l_3\sin\phi & l_1 l_3(1-\cos\phi)+l_2\sin\phi\\ l_1 l_2(1-\cos\phi)+l_3\sin\phi & l_2^2(1-\cos\phi)+\cos\phi & l_2 l_3(1-\cos\phi)-l_1\sin\phi\\ l_1 l_3(1-\cos\phi)-l_2\sin\phi & l_2 l_3(1-\cos\phi)+l_1\sin\phi & l_3^2(1-\cos\phi)+\cos\phi\end{bmatrix}\quad(2)$$
Here l = (l_1, l_2, l_3) is the known vector representing, in the world coordinate system, the vertically upward direction (opposite to the direction of the earth's gravity), and R_SC is the 3x3 matrix that transforms an orientation in the sensor coordinate system (the coordinate system representing the position and orientation of the orientation sensor 14) into an orientation in the object coordinate system (the coordinate system representing the position and orientation of the target object 13); R_SC is set in advance as a known value in accordance with the fixed relative orientation between the orientation sensor 14 and the target object 13.
In step S303, the orientation predicting unit 15 outputs the predicted orientation value R* to the data storage unit 17.
In step S304, the orientation predicting unit 15 judges whether to terminate the process. If not, the process returns to step S300.
Fig. 4 is a flowchart of the process of calculating the parameters representing the position and orientation of the target object 13. This process is performed when the CPU 1001 executes the program corresponding to the software of the position-orientation calculating unit 12. Before the following processing is performed, the program code corresponding to the flowchart shown in Fig. 4 should be loaded into the RAM 1002 in advance.
In the position-orientation calculating unit 12, a total of four parameters, namely the position t = [x y z]^T of the target object 13 and the update value φ of the azimuth-drift-error correction value of the orientation sensor 14, are treated as the unknown parameters to be calculated. In other words, in the first embodiment, not all the elements representing the orientation are treated as unknown. A model is assumed in which the predicted orientation value R* contains drift error only in the azimuth, so that the orientation of the target object 13 can be obtained by determining only the update value φ of the azimuth-drift-error correction value. In the following, the unknown parameters to be obtained are described by the four-valued state vector s = [x y z φ]^T.
In step S400, the predicted orientation value R* of the target object 13 (the output of the orientation predicting unit 15) is input from the data storage unit 17 to the position-orientation calculating unit 12.
In step S401, the position-orientation calculating unit 12 sets s = [x_{τ-1} y_{τ-1} z_{τ-1} 0]^T as the initial value of the state vector s. In this expression, x_{τ-1}, y_{τ-1}, and z_{τ-1} denote the position of the target object 13 calculated in step S411 in the previous cycle (at time τ-1).
In step S402, the image coordinates of the indices detected by the index detecting unit 11 and their object coordinates (coordinate values in the object coordinate system) are input as sets from the data storage unit 17 to the position-orientation calculating unit 12. For example, in the situation of Fig. 1, the image coordinates u_c^{P1}, u_c^{P2}, and u_d^{P2} and the corresponding object coordinates x_C^{P1} and x_C^{P2} are input.
In step S403, the position-orientation calculating unit 12 judges whether the input index information includes information sufficient to estimate the position and orientation, and branches the processing accordingly. Specifically, if the total number of physical indices whose images have been input is at least two, the position-orientation calculating unit 12 proceeds to step S404; if it is less than two, the position-orientation calculating unit 12 proceeds to step S410. For example, in the situation shown in Fig. 1, the process proceeds to step S404 because two indices have been detected (although the number of projected images is three, the number of physical indices is two).
In step S404, the position-orientation calculating unit 12 calculates an estimated value u^{Pkm*} of the image coordinates of each index P_{km}. The calculation is based on the object coordinates (coordinate values in the object coordinate system) x_C^{Pkm} of each index P_{km}, stored in advance as known information, and on a function of the current state vector s:
u^{Pkm*} = F_B(x_C^{Pkm}, s)    (3)
Specifically, the function F_B(·) consists of the following expressions:
$$x_W^{P_{km}}=\begin{bmatrix}x_W^{P_{km}}\\ y_W^{P_{km}}\\ z_W^{P_{km}}\end{bmatrix}=\Delta R(\phi)\cdot R^{*}\cdot x_C^{P_{km}}+\begin{bmatrix}x\\ y\\ z\end{bmatrix}\quad(4)$$
which obtains, from x_C^{Pkm} and s, the coordinates x_W^{Pkm} of the index in the world coordinate system;
$$x_B^{P_{km}}=\begin{bmatrix}x_B^{P_{km}}\\ y_B^{P_{km}}\\ z_B^{P_{km}}\end{bmatrix}=R_{WB}^{-1}\left(x_W^{P_{km}}-t_{WB}\right)\quad(5)$$
which obtains, from the world coordinates x_W^{Pkm}, the coordinates x_B^{Pkm} of the index in the bird's-eye view camera coordinate system (the coordinate system of each of the bird's-eye view cameras 18a, 18b, 18c, and 18d, defined by an origin and three mutually orthogonal axes on each bird's-eye view camera); and
$$u^{P_{km}*}=\begin{bmatrix}u_x^{P_{km}*}&u_y^{P_{km}*}\end{bmatrix}^T=\begin{bmatrix}-f_x^B\,\frac{x_B^{P_{km}}}{z_B^{P_{km}}}&-f_y^B\,\frac{y_B^{P_{km}}}{z_B^{P_{km}}}\end{bmatrix}^T\quad(6)$$
which obtains the image coordinates u^{Pkm*} from the bird's-eye view camera coordinates x_B^{Pkm}.
Here R* denotes the predicted orientation value input in step S400, ΔR(φ) denotes the rotation matrix that adds a rotation by the angle φ in the azimuth direction, f_x^B and f_y^B denote the focal lengths of each bird's-eye view camera 18a, 18b, 18c, and 18d in the x-axis and y-axis directions, R_WB denotes the 3x3 matrix representing the orientation of each bird's-eye view camera 18a, 18b, 18c, and 18d, and t_WB denotes the three-dimensional vector describing the position of each bird's-eye view camera 18a, 18b, 18c, and 18d in the world coordinate system; these are stored in advance as known values for each bird's-eye view camera.
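For illustration, the observation function F_B of expressions (3) to (6) can be sketched as follows, reusing delta_R from the previous sketch; the names are ours and the camera parameters are passed in explicitly.

```python
import numpy as np

def observe_index(x_C, s, R_star, R_WB, t_WB, fx, fy):
    """Sketch of F_B (expressions (3)-(6)).

    x_C        : position of the index in the object coordinate system.
    s          : state vector [x, y, z, phi].
    R_star     : predicted orientation value of the target object.
    R_WB, t_WB : known orientation and position of one bird's-eye camera.
    fx, fy     : known focal lengths of that camera.
    Returns the estimated image coordinates u*.
    """
    x, y, z, phi = s
    x_W = delta_R(phi) @ R_star @ x_C + np.array([x, y, z])  # (4)
    # (5); the transpose equals the inverse since R_WB is a rotation.
    x_B = R_WB.T @ (x_W - t_WB)
    return np.array([-fx * x_B[0] / x_B[2],
                     -fy * x_B[1] / x_B[2]])                 # (6)
```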
In step S405, the position-orientation calculating unit 12 calculates, for each index P_{km}, the error Δu^{Pkm} between the estimated value u^{Pkm*} of the image coordinates and the actually measured value u^{Pkm}, in accordance with the following expression:
Δu^{Pkm} = u^{Pkm} − u^{Pkm*}    (7)
In step S406, the position-orientation calculating unit 12 calculates, for each index P_{km}, the image Jacobian J_{us}^{Pkm} (= ∂u/∂s) with respect to the state vector s. In other words, the image Jacobian is the 2x4 Jacobian matrix whose elements are the solutions obtained by partially differentiating the function F_B of expression (3) with respect to each element of the state vector s. Specifically, the position-orientation calculating unit 12 calculates the 2x3 Jacobian matrix J_{uxB}^{Pkm} (= ∂u/∂x_B), whose elements are the solutions obtained by partially differentiating the right side of expression (6) with respect to each element of the bird's-eye view camera coordinates x_B^{Pkm}, the 3x3 Jacobian matrix J_{xBxW}^{Pkm} (= ∂x_B/∂x_W), whose elements are the solutions obtained by partially differentiating the right side of expression (5) with respect to each element of the world coordinates x_W^{Pkm}, and the 3x4 Jacobian matrix J_{xWs}^{Pkm} (= ∂x_W/∂s), whose elements are the solutions obtained by partially differentiating the right side of expression (4) with respect to each element of the state vector s. The position-orientation calculating unit 12 then calculates J_{us}^{Pkm} by the following expression:
J_{us}^{Pkm} = J_{uxB}^{Pkm} · J_{xBxW}^{Pkm} · J_{xWs}^{Pkm}    (8)
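Expression (8) composes analytically derived Jacobians. As an illustrative alternative under the same definitions, the same 2x4 matrix can be approximated numerically from the observation function of the previous sketch; this is not the patent's method, only a check on it.

```python
import numpy as np

def numeric_jacobian(x_C, s, R_star, cam, eps=1e-6):
    """Central-difference approximation of the 2x4 image Jacobian
    du/ds of expression (8); cam bundles (R_WB, t_WB, fx, fy) of
    one bird's-eye view camera."""
    J = np.zeros((2, 4))
    for i in range(4):
        d = np.zeros(4)
        d[i] = eps
        u_plus = observe_index(x_C, s + d, R_star, *cam)
        u_minus = observe_index(x_C, s - d, R_star, *cam)
        J[:, i] = (u_plus - u_minus) / (2.0 * eps)
    return J
```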
In step S407, the position-orientation calculating unit 12 calculates the correction value Δs of the state vector s from the errors Δu^{Pkm} and the image Jacobians J_{us}^{Pkm} calculated in steps S405 and S406. Specifically, the position-orientation calculating unit 12 generates the 2M-dimensional error vector U, in which the errors Δu^{Pkm} of all the obtained indices P_{km} are arranged vertically, and the 2M x 4 matrix Θ, in which the image Jacobians J_{us}^{Pkm} of all the obtained indices P_{km} are arranged vertically, and calculates the value expressed by the following expression using the pseudo-inverse matrix Θ′ of the 2M x 4 matrix Θ:
Δs = Θ′U    (9)
In the situation shown in Fig. 1, M = 3, so U is a six-dimensional vector and Θ is a 6 x 4 matrix.
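The correction of expressions (9) and (10) is a standard Gauss-Newton step. The following sketch, with names of our choosing, stacks the per-index errors and Jacobians and solves with numpy's pseudo-inverse.

```python
import numpy as np

def correction_step(errors, jacobians):
    """One correction (expression (9)): errors is a list of the
    2-vectors of expression (7), jacobians a list of the 2x4 matrices
    of expression (8). In the situation of Fig. 1 (M = 3), U is
    6-dimensional and Theta is a 6x4 matrix."""
    U = np.concatenate(errors)        # 2M-dimensional error vector
    Theta = np.vstack(jacobians)      # 2M x 4 matrix
    return np.linalg.pinv(Theta) @ U  # correction value delta_s

# Inside the loop of steps S404 to S409:
#   s = s + correction_step(errors, jacobians)   # expression (10)
# repeated until |U| or |delta_s| falls below a threshold (step S409).
```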
In step S408, the position-orientation calculating unit 12 updates the state vector s with the correction value Δs calculated in step S407, in accordance with the following expression:
s + Δs → s    (10)
In step S409, the position-orientation calculating unit 12 judges whether the calculation has converged, by using some criterion, for example, whether the error vector U is smaller than a predetermined threshold or whether the correction value Δs is smaller than a predetermined threshold. If the calculation has not converged, the position-orientation calculating unit 12 performs step S404 and the subsequent steps again by using the corrected state vector s.
If it is judged in step S409 that the calculation has converged, then in step S410 the position-orientation calculating unit 12 calculates the orientation of the target object 13 from the obtained state vector s. Specifically, from the state vector s obtained in the preceding steps, the position-orientation calculating unit 12 obtains the update value φ of the azimuth-drift-error correction value and calculates the orientation R of the target object 13 by the following expression:
R = ΔR(φ) · R*    (11)
In this way, the orientation R of the target object 13 is calculated.
In step S411, the position-orientation calculating unit 12 outputs the information on the obtained position and orientation of the target object 13 to the outside through the interface 1009. The position-orientation calculating unit 12 also outputs the position t of the target object 13 to the data storage unit 17. The output format of the position and orientation may be a set consisting of a 3x3 matrix R representing the orientation and a three-dimensional vector t representing the position, Euler angles obtained by transforming the elements of the orientation, a modeling transformation matrix calculated from the position and orientation, or any other method for describing a position and an orientation.
In step S412, by using the update value φ of the azimuth-drift-error correction value obtained in the preceding calculation steps, the position-orientation calculating unit 12 updates the azimuth-drift-error correction value φ* stored in the data storage unit 17 in accordance with the following expression:
φ* + φ → φ*    (12)
In step S413, the position-orientation calculating unit 12 judges whether to terminate the process. If not, the process returns to step S400, and similar processing is performed on the data input for the next frame and the subsequent frames.
The above processing measures the position and orientation of the target object 13.
Although the above embodiment uses the plurality of bird's-eye view cameras 18a, 18b, 18c, and 18d, it is not always necessary to use a plurality of bird's-eye view cameras; clearly, advantages similar to those of the above embodiment can be obtained even when only one bird's-eye view camera is used. According to the above embodiment, even when the convex hull formed by the indices on the image is small, the position and orientation of the object can be measured stably and with high accuracy. In other words, a stable position and orientation can be obtained with a similar arrangement of indices. In addition, the looser restrictions on index placement make it possible to measure many types of objects. Furthermore, bird's-eye view cameras having wide viewing angles can be used to cover a wider area, so that a wide measurement range of movement is maintained.
First modification of first embodiment
In the above embodiment, the update value φ of the azimuth-drift-error correction value of the orientation sensor is obtained as an unknown value. However, when the accuracy of the orientation sensor is good, when the operating time is short, or when the update value φ of the azimuth-drift-error correction value can be input manually, the parameters obtained by the position-orientation calculating unit 12 can be limited to the position of the target object 13. A position-orientation measuring apparatus according to the first modification of this embodiment measures the position and orientation of an arbitrary target object to be measured, and is designed by changing the function of the position-orientation calculating unit 12 in the position-orientation measuring apparatus according to the first embodiment. The position-orientation measuring apparatus and its position-orientation measuring method according to the first modification are described below.
In the first modification, the update value φ of the first embodiment is always set to 0. In other words, the position-orientation calculating unit 12 in the first modification describes the unknown parameters to be obtained by the three-valued state vector s′ = [x y z]^T. The position-orientation calculating unit 12 in the first modification can be realized by removing the terms related to the update value φ from the processing steps of the position-orientation calculating unit 12 (for example, from the Jacobian matrices and expression (4)). For example, expression (4) becomes the following expression:
$$x_W^{P_{km}}=\begin{bmatrix}x_W^{P_{km}}\\ y_W^{P_{km}}\\ z_W^{P_{km}}\end{bmatrix}=R^{*}\cdot x_C^{P_{km}}+\begin{bmatrix}x\\ y\\ z\end{bmatrix}\quad(13)$$
According to the position-orientation measuring apparatus of the first modification, the number of unknown parameters is reduced. A further improvement in the stability of the obtained solution (the position and orientation of the target object 13) can therefore be expected.
To input the update value φ of the azimuth-drift-error correction value manually, a correction-value updating unit can be added to the structure shown in Fig. 1, for example. This correction-value updating unit obtains the update value φ of the azimuth-drift-error correction value from input by the operator, and updates the azimuth-drift-error correction value φ* stored in the data storage unit 17 in accordance with expression (12). The correction-value updating unit can use particular keys of the keyboard 1004 as its interface. For example, the correction-value updating unit may be set so that the plus "+" key sets an update value of +0.1 and the minus "-" key sets an update value of -0.1. Obviously, such a manually operated correction-value updating unit can also be used jointly with the form of the first embodiment, in which the update value φ of the azimuth-drift-error correction value is obtained from image information. A sketch of such key handling follows.
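A minimal sketch of such a correction-value updating unit, assuming key events are delivered by some UI layer; only the plus/minus bindings with steps of 0.1 come from the text above, the rest is illustrative.

```python
STEP = 0.1  # update value per key press, as described above

def on_key(key, storage):
    """Apply expression (12) to the stored correction value phi*.
    `storage` is assumed to expose the azimuth-drift-error correction
    value as an attribute; the attribute name is hypothetical."""
    if key == '+':
        storage.azimuth_drift_correction += STEP
    elif key == '-':
        storage.azimuth_drift_correction -= STEP
```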
Second modification of first embodiment
In the first embodiment and its first modification, the parameters obtained as unknown values are fixed: either the position together with the update value φ of the azimuth-drift-error correction value, or the position only. However, the set of parameters treated as unknown need not always be fixed. If necessary, a better estimate of the position and orientation can be obtained by changing, in accordance with the characteristics of the parameters, which parameters are treated as unknown. A position-orientation measuring apparatus according to the second modification measures the position and orientation of an arbitrary target object to be measured, and is designed by changing the function of the position-orientation calculating unit 12 in the position-orientation measuring apparatus according to the first embodiment. The position-orientation measuring apparatus and its position-orientation measuring method according to the second modification are described below.
The position-orientation calculating unit in the second modification has the combined functions of the position-orientation calculating unit in the first embodiment and the position-orientation calculating unit in the first modification of the first embodiment. Normally, the position-orientation measuring apparatus according to the second modification performs the processing of the position-orientation calculating unit in the first modification, in which only the position is used as the unknown parameter. In addition, at regular time intervals (for example, once every 10 seconds (every 300 frames)), it performs the processing of the position-orientation calculating unit 12 in the first embodiment, in which the update value φ of the azimuth-drift-error correction value and the position are used as the unknown parameters. Preferably, the time interval at which the azimuth-drift-error correction value is updated is set in accordance with the drift characteristics of the orientation sensor, and preferably the time interval can be set by an interactive operation of the operator. A sketch of this switching scheme follows.
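A minimal sketch of the switching, assuming 30 frames per second so that 300 frames correspond to about 10 seconds; estimate_position_and_drift and estimate_position_only are hypothetical names standing in for the processing of the first embodiment and of the first modification, respectively.

```python
N = 300  # interval, in frames, at which the drift correction is updated

def process_frame(frame_no, measurements):
    # Every N-th frame: full estimation with s = [x, y, z, phi]
    # (first embodiment), which also refreshes the stored
    # correction value phi*.
    if frame_no % N == 0:
        return estimate_position_and_drift(measurements)
    # Otherwise: cheap estimation with s' = [x, y, z] only
    # (first modification).
    return estimate_position_only(measurements)
```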
According to the position-orientation measuring apparatus of the second modification, when an orientation sensor whose accuracy is such that the azimuth drift error can be ignored over a short period is used as the orientation sensor 14, an improvement in the stability of the obtained solution can be expected while the azimuth drift error is still corrected.
The 3rd modification of first embodiment
In the first embodiment and the above modifications, the update value φ of the azimuth-drift-error correction value is obtained from the image information of a single point in time. The azimuth-drift-error amounts between frames are highly correlated, so by using the information of a plurality of frames, the azimuth-drift-error amount can be obtained with higher accuracy. A position-orientation measuring apparatus according to the third modification measures the position and orientation of an arbitrary target object to be measured, and is designed by changing the function of the position-orientation calculating unit 12 in the first embodiment. The position-orientation measuring apparatus and its position-orientation measuring method according to the third modification are described below.
The position-orientation calculating unit 12 in the third modification has the combined functions of the position-orientation calculating unit 12 in the first embodiment and the position-orientation calculating unit in the first modification, and performs the parameter estimation processing of both. Fig. 5 is a flowchart of the process of calculating the parameters representing the position and orientation of the target object 13. Before the following steps are performed, the program code corresponding to the flowchart should be loaded into the RAM 1002 in advance.
In step S500, similarly to step S400 in the first embodiment, the predicted orientation value R* of the target object 13, that is, the output of the orientation predicting unit 15, is input from the data storage unit 17 to the position-orientation calculating unit 12.
In step S501, similarly to step S402, the image coordinates of the indices detected by the index detecting unit 11 and their object coordinates are input as sets from the data storage unit 17 to the position-orientation calculating unit 12.
In step S502, using as unknown parameters the position t = [x y z]^T of the target object 13 and the update value φ of the azimuth-drift-error correction value, the position-orientation calculating unit 12 estimates the position t and the update value φ by processing similar to that of steps S401 to S409.
In step S503, the position-orientation calculating unit 12 adds the update value φ of the azimuth-drift-error correction value calculated in step S502 to the sum φ_SUM.
In step S504, the position-orientation calculating unit 12 judges whether the summation has been completed over a predetermined number of frames (for example, 30 frames). If the summation has been completed, the position-orientation calculating unit 12 proceeds to step S505; if not, it proceeds to step S508.
In step S505, the position-orientation calculating unit 12 calculates the mean of the update values of the azimuth-drift-error correction value by dividing the sum φ_SUM obtained in step S503 by the number of frames, and uses this mean as the new update value φ of the azimuth-drift-error correction value. Then φ_SUM is cleared to zero.
In step S506, similarly to step S412 in the first embodiment, the position-orientation calculating unit 12 updates the azimuth-drift-error correction value φ* stored in the data storage unit 17 in accordance with expression (12), by using the update value φ of the azimuth-drift-error correction value obtained in step S505.
In step S507, by using the update value φ of the azimuth-drift-error correction value obtained in step S505, the position-orientation calculating unit 12 calculates the orientation of the target object 13 in accordance with expression (11), and uses the calculated orientation as the new predicted orientation value.
In step S508, in the position-orientation calculating unit 12, the position t = [x y z]^T of the target object 13 is used as the unknown parameter and is estimated by processing similar to that of the first modification.
In step S509, similarly to step S411 in the first embodiment, the position-orientation calculating unit 12 outputs the information on the position and orientation of the target object 13.
In step S510, the position-orientation calculating unit 12 judges whether to terminate the process. If not, the process returns to step S500, and similar processing is performed on the data input for the next frame (at time τ+1) and the subsequent frames.
In the above processing, multi-frame information is used to improve the accuracy of the update value of the azimuth drift error. Although the third modification uses the mean of the update values obtained over the frames, the median of the update values over the frames may also be used, and any other low-pass filter may be used; a sketch of this smoothing follows.
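A minimal sketch of the multi-frame smoothing of steps S503 to S507, assuming the per-frame update values φ arrive one at a time; as noted above, the mean could be replaced by the median or any other low-pass filter.

```python
import numpy as np

N = 30          # number of frames to accumulate (step S504)
phi_buffer = []

def smooth_drift_update(phi):
    """Accumulate per-frame update values and return their mean once
    every N frames; returns None while still accumulating."""
    phi_buffer.append(phi)                 # step S503: summation
    if len(phi_buffer) < N:
        return None
    phi_mean = float(np.mean(phi_buffer))  # step S505: mean
    phi_buffer.clear()                     # clear the sum
    return phi_mean  # used in steps S506-S507 to update phi* and R*
```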
Second embodiment
A position-orientation measuring apparatus according to a second embodiment of the present invention measures the position and orientation of an imaging device. The position-orientation measuring apparatus and its position-orientation measuring method according to the second embodiment are described below.
Fig. 6 shows the structure of a position-orientation measuring apparatus 100 according to the second embodiment. As shown in Fig. 6, the position-orientation measuring apparatus 100 includes bird's-eye view cameras 180a, 180b, 180c, and 180d, an image input unit 160, a data storage unit 170, an index detecting unit 110, an orientation sensor 140, an orientation predicting unit 150, and a position-orientation calculating unit 120. The position-orientation measuring apparatus 100 is connected to an imaging device 130.
At a plurality of positions in the real space, indices Q_k (k = 1, ..., K_Q) whose positions x_W^{Qk} in the world coordinate system are known (hereinafter referred to as "subjective-viewpoint indices") are placed as indices to be observed by the imaging device 130. Here, the world coordinate system is defined by an origin and three mutually orthogonal axes in the scene. At a plurality of positions on the orientation sensor 140 and/or the imaging device 130, indices P_k (k = 1, ..., K_P) whose positions x_C^{Pk} in the object coordinate system are known (hereinafter referred to as "bird's-eye view indices") are placed so that they can be observed by the bird's-eye view cameras 180a, 180b, 180c, and 180d. Here, the object coordinate system is defined by an origin and three mutually orthogonal axes on the imaging device 130.
Preferably, these indices are placed so that, when the imaging device 130 is located at any point in the measurement range in which the position and orientation are measured, the total number of subjective-viewpoint indices observed in the subjective-viewpoint image obtained by the imaging device 130 and of (physical) bird's-eye view indices observed in the bird's-eye view images obtained by the bird's-eye view cameras 180a, 180b, 180c, and 180d is at least two. In the situation shown in Fig. 6, three subjective-viewpoint indices Q_1, Q_2, and Q_3 and two bird's-eye view indices P_1 and P_2 are placed; of the three subjective-viewpoint indices Q_1, Q_2, and Q_3, two subjective-viewpoint indices Q_1 and Q_3 are included in the field of view of the imaging device 130. The bird's-eye view index P_1 is included in the field of view of the bird's-eye view camera 180c, and the bird's-eye view index P_2 is included in the fields of view of the bird's-eye view cameras 180c and 180d.
The subjective-viewpoint indices Q_k and the bird's-eye view indices P_k may be formed of, for example, spherical or circular markers having different colors, or of feature points, such as natural features, having different texture features. The subjective-viewpoint indices Q_k and the bird's-eye view indices P_k may have any form, provided that the image coordinates of their projected images in a captured image can be detected and that each index can be identified in some manner. The subjective-viewpoint indices Q_k and the bird's-eye view indices P_k may be placed intentionally, or natural shapes not placed intentionally may be used.
An image output by the imaging device 130 (hereinafter called a "subjective-viewpoint image") is input to the position-orientation measuring apparatus 100.
The bird's-eye view cameras 180a, 180b, 180c, and 180d are fixedly placed at positions such that, when the imaging device 130 is within the measurement range, at least one of them can capture an image of the imaging device 130. The position and orientation of each bird's-eye view camera in the world coordinate system should be stored in the data storage unit 170 in advance as known values. The images output by the bird's-eye view cameras (hereinafter called "bird's-eye view images") are input to the image input unit 160.
The subjective-viewpoint image and the bird's-eye view images input to the position-orientation measuring apparatus 100 are converted into digital data by the image input unit 160, which stores the digital data in the data storage unit 170.
The orientation sensor 140 is mounted on the imaging device 130. It measures its own current orientation and outputs the measured orientation to the orientation predicting unit 150. The orientation sensor 140 is a sensor unit based on, for example, gyroscopic angular-rate sensors, such as the TISS-5-40 made by Tokimec, Inc. of Japan or the InertiaCube2 made by InterSense, Inc. of the United States. The measured orientation value obtained by such a sensor contains error and differs from the true orientation. However, the above orientation sensors incorporate acceleration sensors that observe the direction of the earth's gravity, and thereby cancel the accumulation of drift error in the inclination angles. Thus, they have the characteristic of producing no drift error in the inclination directions (pitch and roll angles). In other words, the sensor does accumulate drift error over time about the azimuth (yaw) direction around the gravity axis.
The orientation predicting unit 150 receives the azimuth-drift-error correction value φ* from the data storage unit 170, predicts the orientation of the imaging device 130 by correcting the measured orientation value input from the orientation sensor 140, and outputs the predicted orientation to the data storage unit 170.
The index detecting unit 110 receives the subjective-viewpoint image and the bird's-eye view images from the data storage unit 170 and detects the image coordinates of the indices captured in the input images. For example, when the indices are formed of markers having different colors, regions corresponding to the marker colors are detected from the image, and their centroid positions are used as the detected coordinates of the indices. When the indices are formed of feature points having different texture features, the positions of the indices are detected by template matching based on template images of the indices stored in advance as known information. The search range for each index can be narrowed by predicting its position in the image, using the calculated position value of the imaging device 130 output by the position-orientation calculating unit 120 and stored in the data storage unit 170, together with the predicted orientation value of the imaging device 130 output by the orientation predicting unit 150 and stored in the data storage unit 170. This narrowing reduces the computation required for index detection as well as false detection and misidentification of indices.
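As an illustration, a minimal sketch of the color-marker case, assuming OpenCV and per-marker HSV threshold bounds chosen by the implementer; the patent does not prescribe this particular implementation:

```python
import cv2

def detect_color_markers(image_bgr, lower_hsv, upper_hsv, min_area=10):
    """Detect color markers and return their centroid image coordinates.

    lower_hsv / upper_hsv: HSV bounds for the marker color (assumed, tuned per marker).
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)   # region corresponding to the marker color
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:                    # ignore tiny noise regions
            # centroid of the color region = detected index coordinates
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```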
The index detecting unit 110 outputs the image coordinates of the detected indices and their identifiers to the data storage unit 170. In the following description, an index detected in the subjective-viewpoint image is denoted Q_{kn}, using the identifier n (n = 1, ..., N) assigned to each detected index. An index detected in a bird's-eye view image is denoted P_{kxm}, using the camera identifier x (x = a, b, c, d) and the identifier m (m = 1, ..., M_x) assigned to each detected index. N denotes the number of indices detected in the subjective-viewpoint image, M_x denotes the number of indices detected in each bird's-eye view image, and M denotes the total number of indices detected in the bird's-eye view images. The image coordinates of a detected index Q_{kn} are denoted u^{Qkn}, and the image coordinates of a detected index P_{kxm} are denoted u_a^{Pkam}, u_b^{Pkbm}, u_c^{Pkcm}, or u_d^{Pkdm}, according to the identifier of the bird's-eye view camera that captured the image. For example, in the situation shown in Fig. 6, N = 2, M_a = 0, M_b = 0, M_c = 2, M_d = 1, and M = 3. Accordingly, the index identifiers (k_1 = 1, k_2 = 3, k_{c1} = 1, k_{c2} = 2, k_{d1} = 2), the identifiers of the bird's-eye view cameras that captured the index images, and the corresponding image coordinates (u^{Qk1}, u^{Qk2}, u_c^{Pkc1}, u_c^{Pkc2}, and u_d^{Pkd1}) are output.
The predicted orientation value of the imaging device 130; the image coordinates u^{Qkn} and world coordinates x_W^{Qkn} of the subjective-viewpoint indices detected by the index detecting unit 110; and the image coordinates u^{Pkam}, u^{Pkbm}, u^{Pkcm}, u^{Pkdm} of the bird's-eye view indices together with their object coordinates (coordinate values in the object coordinate system) x_C^{Pkam}, x_C^{Pkbm}, x_C^{Pkcm}, x_C^{Pkdm} are input from the data storage unit 170 to the position-orientation calculating unit 120. The position-orientation calculating unit 120 calculates the position and orientation of the imaging device 130 from the above information and outputs the calculated position and orientation through an interface (not shown). It also outputs the calculated position to the data storage unit 170, and updates the azimuth-drift-error correction value of the orientation sensor 140 stored in the data storage unit 170 with the update value obtained in the course of calculating the position and orientation of the imaging device 130.
The data storage unit 170 stores various types of data: the azimuth-drift-error correction value, the images input from the image input unit 160, the predicted orientation value input from the orientation predicting unit 150, the calculated position value input from the position-orientation calculating unit 120, the image coordinates and identifiers of the indices input from the index detecting unit 110, the world coordinates of the subjective-viewpoint indices as known values, the object coordinates (coordinate values in the object coordinate system) of the bird's-eye view indices, and the camera parameters of the bird's-eye view cameras 180a, 180b, 180c, and 180d. These data are input to or output from the data storage unit 170 as needed.
Each of the image input unit 160, the data storage unit 170, the index detecting unit 110, the orientation predicting unit 150, and the position-orientation calculating unit 120 in Fig. 6 may be treated as an independent device. Alternatively, the functions of these units may be realized by installing each unit as software on one or more computers and executing the software with the CPU of each computer. In the second embodiment, the image input unit 160, the data storage unit 170, the index detecting unit 110, the orientation predicting unit 150, and the position-orientation calculating unit 120 are software executed on a single computer. Fig. 1 is a block diagram showing the basic structure of a computer that executes each of these units as software.
Fig. 7 is a flowchart of the processing of the orientation predicting unit 150. This processing is performed by having the CPU 1001 execute the software program of the orientation predicting unit 150. The program code corresponding to the flowchart should be loaded into the RAM 1002 in advance, before the following processing is performed.
There are various ways of representing an orientation. In the second embodiment, an orientation is represented by a 3 × 3 rotation matrix R.
At step S3000, the measured orientation value R# (where # denotes a sensor-measured value) is input from the orientation sensor 140 to the orientation predicting unit 150.
At step S3010, the azimuth-drift-error correction value φ* is input from the data storage unit 170 to the orientation predicting unit 150.
At step S3020, the orientation predicting unit 150 calculates the predicted orientation value R* of the imaging device 130 by substituting the measured orientation value R# (representing the orientation of the orientation sensor 140) into the transformation from the orientation of the orientation sensor 140 to the orientation of the imaging device 130, and by correcting the drift error according to the azimuth-drift-error correction value φ*:
R* = ΔR(φ*) · R# · R_SC    (14)
where ΔR(φ) denotes the rotation matrix that adds a rotation of φ in the azimuth direction, defined as a function of φ by
ΔR(φ) = \begin{pmatrix} l_1 l_1 (1-\cos\phi)+\cos\phi & l_2 l_1 (1-\cos\phi)-l_3\sin\phi & l_3 l_1 (1-\cos\phi)+l_2\sin\phi \\ l_1 l_2 (1-\cos\phi)+l_3\sin\phi & l_2 l_2 (1-\cos\phi)+\cos\phi & l_3 l_2 (1-\cos\phi)-l_1\sin\phi \\ l_1 l_3 (1-\cos\phi)-l_2\sin\phi & l_2 l_3 (1-\cos\phi)+l_1\sin\phi & l_3 l_3 (1-\cos\phi)+\cos\phi \end{pmatrix}    (15)
where l = (l_1, l_2, l_3) denotes the vertical direction in the world coordinate system (opposite to the direction of the earth's gravity), and R_SC denotes the 3 × 3 matrix that transforms an orientation from the object coordinate system (the coordinate system representing the position and orientation of the imaging device 130) to the sensor coordinate system (the coordinate system representing the position and orientation of the orientation sensor 140); R_SC is set in advance as a known value according to the fixed relative orientation between the orientation sensor 140 and the imaging device 130.
At step S3030, the orientation predicting unit 150 outputs the predicted orientation value R* to the data storage unit 170.
At step S3040, the orientation predicting unit 150 determines whether to terminate the processing. If it determines not to terminate, it returns to step S3000.
Fig. 8 is a flowchart of the process of calculating the parameters representing the position and orientation of the imaging device 130. This processing is performed by having the CPU 1001 execute the software program corresponding to the position-orientation calculating unit 120. The program code corresponding to the flowchart should be loaded into the RAM 1002 in advance, before the following processing is performed.
In the position-orientation calculating unit 120, a total of four parameters, namely the position t = [x y z]^T of the imaging device 130 and the update value φ of the azimuth-drift-error correction value of the orientation sensor 140, are treated as the unknown parameters to be calculated. In other words, in the second embodiment, not all elements representing the orientation are treated as unknowns. It is assumed that the predicted orientation value R* contains drift error only in the azimuth direction, and a model is used in which the orientation of the imaging device 130 can be obtained merely by determining the update value φ of the azimuth-drift-error correction value. In the following, the unknown parameters to be obtained are described by the four-valued state vector s = [x y z φ]^T.
At step S4000, the predicted orientation value R* of the imaging device 130 (the output of the orientation predicting unit 150) is input from the data storage unit 170 to the position-orientation calculating unit 120.
At step S4010, the position-orientation calculating unit 120 sets s = [x_{τ-1} y_{τ-1} z_{τ-1} 0]^T as the initial value of the state vector s. Here, x_{τ-1}, y_{τ-1}, and z_{τ-1} denote the position of the imaging device 130 calculated in step S4110 in the previous cycle (at time τ-1).
At step S4020, the image coordinates and world coordinates of the subjective-viewpoint indices detected by the index detecting unit 110, and the image coordinates of the bird's-eye view indices detected by the index detecting unit 110 together with their object coordinates (coordinate values in the object coordinate system), are input from the data storage unit 170 to the position-orientation calculating unit 120. For example, in the situation of Fig. 6, the image coordinates u^{Q1} and u^{Q3} of the subjective-viewpoint indices with the corresponding world coordinates x_W^{Q1} and x_W^{Q3}, and the image coordinates u_c^{P1}, u_c^{P2}, and u_d^{P2} with the corresponding object coordinates x_C^{P1} and x_C^{P2}, are input.
At step S4030, the position-orientation calculating unit 120 determines whether the input index information is sufficient for estimating the position and orientation, and branches accordingly. Specifically, if the total number of physical indices whose images have been input is at least two, the position-orientation calculating unit 120 advances to step S4040; if it is less than two, the unit advances to step S4100. For example, in the situation shown in Fig. 6, two subjective-viewpoint indices and two bird's-eye view indices are detected (the number of projected images is three, but the number of physical indices is two), so the total is four, and the processing advances to step S4040.
At step S4040, the position-orientation calculating unit 120 calculates, from the current state vector s, the estimated image coordinates of each subjective-viewpoint index Q_{kn} and each bird's-eye view index P_{km} input in step S4020.
The estimated image coordinates u^{Qkn*} of each subjective-viewpoint index Q_{kn} are calculated from the world coordinates x_W^{Qkn} of Q_{kn} and the current state vector s by the function
u^{Qkn*} = F_C(x_W^{Qkn}, s)    (16)
Specifically, the function F_C( ) consists of the expression
x_C^{Qkn} = [x_C^{Qkn} y_C^{Qkn} z_C^{Qkn}]^T = (ΔR(φ) · R*)^{-1} (x_W^{Qkn} - [x y z]^T)    (17)
which obtains from x_W^{Qkn} and the state vector s the coordinates x_C^{Qkn} in the object coordinate system (the coordinate system defined by the imaging device 130), and the expression
u^{Qkn*} = [u_x^{Qkn*} u_y^{Qkn*}]^T = [-f_x^C x_C^{Qkn}/z_C^{Qkn}  -f_y^C y_C^{Qkn}/z_C^{Qkn}]^T    (18)
which obtains the image coordinates u^{Qkn*} from x_C^{Qkn}. Here, R* denotes the predicted orientation value input at step S4000, ΔR(φ) denotes the rotation matrix that adds a rotation of φ in the azimuth direction, and f_x^C and f_y^C denote the focal lengths of the imaging device 130 in the X-axis and Y-axis directions, respectively, stored in advance as known values.
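A sketch of F_C( ) under the same numpy conventions, reusing delta_R from the earlier sketch; the name and parameter packaging are assumptions:

```python
def project_subjective_index(x_w, s, R_star, f_c, l_vert):
    """Estimated image coordinates of a subjective-viewpoint index, F_C of expression (16).

    x_w:  world coordinates of the index
    s:    state vector [x, y, z, phi]
    f_c:  (f_x, f_y) focal lengths of the imaging device
    """
    t, phi = s[:3], s[3]
    # Expression (17): world -> object (camera) coordinates
    x_c = np.linalg.inv(delta_R(phi, l_vert) @ R_star) @ (x_w - t)
    # Expression (18): perspective projection onto the image plane
    return np.array([-f_c[0] * x_c[0] / x_c[2], -f_c[1] * x_c[1] / x_c[2]])
```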
The estimated image coordinates u^{Pkm*} of each bird's-eye view index P_{km} are calculated from the object coordinates x_C^{Pkm} of P_{km} (its coordinate values in the object coordinate system) and the state vector s by the function
u^{Pkm*} = F_B(x_C^{Pkm}, s)    (19)
Specifically, the function F_B( ) consists of the expression
x_W^{Pkm} = [x_W^{Pkm} y_W^{Pkm} z_W^{Pkm}]^T = ΔR(φ) · R* · x_C^{Pkm} + [x y z]^T    (20)
which obtains from x_C^{Pkm} and the state vector s the coordinates x_W^{Pkm} of the index in the world coordinate system; the expression
x_B^{Pkm} = [x_B^{Pkm} y_B^{Pkm} z_B^{Pkm}]^T = R_WB^{-1} (x_W^{Pkm} - t_WB)    (21)
which obtains from x_W^{Pkm} the coordinates x_B^{Pkm} of the index in the bird's-eye view camera coordinate system (a coordinate system whose origin and three mutually orthogonal axes are defined on each of the bird's-eye view cameras 180a, 180b, 180c, and 180d); and the expression
u^{Pkm*} = [u_x^{Pkm*} u_y^{Pkm*}]^T = [-f_x^B x_B^{Pkm}/z_B^{Pkm}  -f_y^B y_B^{Pkm}/z_B^{Pkm}]^T    (22)
which obtains the image coordinates u^{Pkm*} from x_B^{Pkm}. Here, f_x^B and f_y^B denote the focal lengths of the bird's-eye view cameras 180a, 180b, 180c, and 180d in the X-axis and Y-axis directions, respectively; R_WB denotes the 3 × 3 matrix describing the orientation of each bird's-eye view camera, and t_WB denotes the three-valued vector describing the position of each bird's-eye view camera in the world coordinate system, both stored in advance as known values for each of the bird's-eye view cameras.
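A corresponding sketch of F_B( ), again reusing delta_R; since R_WB is a rotation matrix, its inverse is taken as its transpose:

```python
def project_birdseye_index(x_c, s, R_star, R_wb, t_wb, f_b, l_vert):
    """Estimated image coordinates of a bird's-eye view index, F_B of expression (19)."""
    t, phi = s[:3], s[3]
    # Expression (20): object -> world coordinates of the index
    x_w = delta_R(phi, l_vert) @ R_star @ x_c + t
    # Expression (21): world -> bird's-eye view camera coordinates
    x_b = R_wb.T @ (x_w - t_wb)          # rotation inverse = transpose
    # Expression (22): perspective projection onto the bird's-eye view image
    return np.array([-f_b[0] * x_b[0] / x_b[2], -f_b[1] * x_b[1] / x_b[2]])
```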
At step S4050, for each index (subjective-viewpoint and bird's-eye view), the position-orientation calculating unit 120 calculates the error between the estimated value and the actual measured value by the expressions
Δu^{Qkn} = u^{Qkn} - u^{Qkn*}    (23)
Δu^{Pkm} = u^{Pkm} - u^{Pkm*}    (24)
At step S4060, for each index (subjective-viewpoint and bird's-eye view), the position-orientation calculating unit 120 calculates the image Jacobian with respect to the state vector s. The image Jacobian of a subjective-viewpoint index Q_{kn} is the 2 × 4 Jacobian matrix J_us^{Qkn} (= ∂u/∂s) whose elements are the partial derivatives of F_C( ) with respect to each element of the state vector s. Specifically, after obtaining the 2 × 3 Jacobian matrix J_ux^{Qkn} (= ∂u/∂x), whose elements are the partial derivatives of the right side of expression (18) with respect to each element of x_C^{Qkn}, and the 3 × 4 Jacobian matrix J_xs^{Qkn} (= ∂x/∂s), whose elements are the partial derivatives of the right side of expression (17) with respect to each element of the state vector s, the image Jacobian of Q_{kn} is calculated from these matrices by
J_us^{Qkn} = J_ux^{Qkn} · J_xs^{Qkn}    (25)
The image Jacobian of a bird's-eye view index is the 2 × 4 Jacobian matrix J_us^{Pkm} (= ∂u/∂s) whose elements are the partial derivatives of the function F_B( ) in expression (19) with respect to each element of the state vector s. After obtaining the 2 × 3 Jacobian matrix J_uxB^{Pkm} (= ∂u/∂x_B), whose elements are the partial derivatives of the right side of expression (22) with respect to each element of the bird's-eye view camera coordinates x_B^{Pkm}; the 3 × 3 Jacobian matrix J_xBxW^{Pkm} (= ∂x_B/∂x_W), whose elements are the partial derivatives of the right side of expression (21) with respect to each element of the world coordinates x_W^{Pkm}; and the 3 × 4 Jacobian matrix J_xWs^{Pkm} (= ∂x_W/∂s), whose elements are the partial derivatives of the right side of expression (20) with respect to each element of the state vector s, the image Jacobian of P_{km} is calculated from these matrices by
J_us^{Pkm} = J_uxB^{Pkm} · J_xBxW^{Pkm} · J_xWs^{Pkm}    (26)
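The patent obtains these Jacobians analytically by the chain rules of expressions (25) and (26). Purely as an illustrative stand-in (a hypothetical helper, not the patent's method), the same 2 × 4 matrix can be approximated by central finite differences:

```python
def numerical_jacobian(f, s, eps=1e-6):
    """Approximate image Jacobian du/ds of a projection function f at state s.

    f maps a state vector to 2-dimensional image coordinates; the result has
    shape (2, len(s)). The patent computes this analytically via the chain rule.
    """
    u0 = f(s)
    J = np.zeros((u0.size, s.size))
    for i in range(s.size):
        sp, sm = s.copy(), s.copy()
        sp[i] += eps
        sm[i] -= eps
        J[:, i] = (f(sp) - f(sm)) / (2 * eps)   # central difference per element of s
    return J
```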
At step S4070, the position-orientation calculating unit 120 calculates the correction value Δs of the state vector s from the errors Δu and the image Jacobians J_us calculated for each index in steps S4050 and S4060, by
Δs = Θ′ U    (27)
where U denotes the 2(N+M)-dimensional error vector in which the errors Δu obtained for the indices (subjective-viewpoint and bird's-eye view) are arranged vertically:
U = [Δu^{Qk1} ... Δu^{QkN} Δu^{Pk1} ... Δu^{PkM}]^T    (28)
Θ denotes the 2(N+M) × 4 matrix in which the image Jacobians obtained for the indices (subjective-viewpoint and bird's-eye view) are arranged vertically:
Θ = [J_us^{Qk1} ... J_us^{QkN} J_us^{Pk1} ... J_us^{PkM}]^T    (29)
and Θ′ denotes the pseudo-inverse matrix of Θ. In the situation shown in Fig. 6, N = 2 and M = 3, so U is a 10-dimensional vector and Θ is a 10 × 4 matrix.
At step S4080, the position-orientation calculating unit 120 corrects the state vector s by using the correction value Δs calculated in step S4070 according to the following expression, and takes the obtained value as the new estimate:
s + Δs → s    (30)
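Putting expressions (27) to (30) together, one correction step might look like the following sketch, where numpy's pinv plays the role of the pseudo-inverse Θ′; the function name is illustrative:

```python
def correction_step(s, errors, jacobians):
    """One correction step of expressions (27)-(30).

    errors:    list of 2-vectors    (delta u per detected index)
    jacobians: list of 2x4 matrices (J_us per detected index)
    """
    U = np.concatenate(errors)              # 2(N+M)-dimensional error vector (28)
    Theta = np.vstack(jacobians)            # 2(N+M) x 4 matrix (29)
    delta_s = np.linalg.pinv(Theta) @ U     # pseudo-inverse solve, expression (27)
    return s + delta_s                      # expression (30)
```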
At step S4090, the position-orientation calculating unit 120 determines whether the calculation has converged, using a criterion such as whether the error vector U is smaller than a predetermined threshold, or whether the correction value Δs is smaller than a predetermined threshold. If the calculation has not converged, the position-orientation calculating unit 120 re-executes step S4040 and the subsequent steps using the corrected state vector s.
If it is determined in step S4090 that the calculation has converged, then at step S4100 the position-orientation calculating unit 120 calculates the orientation of the imaging device 130 from the obtained state vector s. Specifically, from the state vector s obtained in the preceding steps, the unit takes the update value φ of the azimuth-drift-error correction value and calculates the orientation R of the imaging device 130 by
R = ΔR(φ) · R*    (31)
At step S4110, the position-orientation calculating unit 120 outputs the obtained position and orientation information of the imaging device 130 to the outside through the interface 1009. It also outputs the position t of the imaging device 130 to the data storage unit 170. The output format of the position and orientation may be a pair consisting of the 3 × 3 matrix R representing the orientation and the three-valued vector t representing the position, Euler angles obtained by transforming the orientation elements, a viewing transformation matrix calculated from the position and orientation, or any other method of describing position and orientation.
At step S4120, using the update value φ of the azimuth-drift-error correction value obtained in the preceding calculation steps, the position-orientation calculating unit 120 updates the azimuth-drift-error correction value φ* stored in the data storage unit 170 according to
φ* + φ → φ*    (32)
At step S4130, the position-orientation calculating unit 120 determines whether to terminate the processing. If it determines not to terminate, it proceeds to step S4000 and performs similar processing on the input data for the next frame and subsequent frames.
The above processing measures the position and orientation of the imaging device 130.
With the position-orientation measuring apparatus according to the second embodiment, the position and orientation of the imaging device 130 can be measured whenever the number of indices observed in the subjective-viewpoint image plus the number of indices observed in the bird's-eye view images is at least two. Therefore, even if the subjective-viewpoint image is occluded (by a hand or the like), the position and orientation of the imaging device 130 can still be measured from the bird's-eye view image information (observations of at least two bird's-eye view indices). Conversely, even if all the bird's-eye view indices are occluded, the position and orientation of the imaging device 130 can still be measured from the subjective-viewpoint image information (observations of at least two subjective-viewpoint indices).
Although a plurality of bird's-eye view cameras 180a, 180b, 180c, and 180d are used in the second embodiment, a plurality of them is not always necessary. Obviously, advantages similar to those of the second embodiment can be obtained even when a single bird's-eye view camera is used.
First modification of the second embodiment
The second embodiment is intended to measure the position and orientation of an imaging device moving in space. Unlike the second embodiment, the position-orientation measuring apparatus according to the first modification of the second embodiment is intended to measure the position and orientation of an arbitrary target object. It is designed by adding a subjective-viewpoint camera to the position-orientation measuring apparatus according to the second embodiment. The position-orientation measuring apparatus and its method according to the first modification of the second embodiment are described below.
Fig. 9 is a block diagram showing the structure of the position-orientation measuring apparatus according to the first modification of the second embodiment (denoted by reference numeral 500). As shown in Fig. 9, the position-orientation measuring apparatus 500 includes bird's-eye view cameras 180a, 180b, 180c, and 180d, an image input unit 160, a data storage unit 170, an index detecting unit 110, an orientation sensor 140, an orientation predicting unit 150, a position-orientation calculating unit 520, and a subjective-viewpoint camera 530.
Parts of the position-orientation measuring apparatus 500 having the same functions as in the second embodiment are labeled with the same reference numerals as in Fig. 6, and their descriptions are omitted. The apparatus 500 differs from the second embodiment in that the image captured by the subjective-viewpoint camera 530 is input to the image input unit 160 as the subjective-viewpoint image, the predicted orientation value obtained by the orientation predicting unit 150 is the orientation of the subjective-viewpoint camera 530, and the orientation sensor 140 is mounted on the subjective-viewpoint camera 530.
The subjective-viewpoint camera 530 is fixedly mounted on the target object 580. The position and orientation of the target object 580 in the subjective-viewpoint camera coordinate system should be known.
The predicted orientation value R* of the subjective-viewpoint camera 530; the image coordinates and world coordinates of each subjective-viewpoint index detected by the index detecting unit 110; and the image coordinates of each bird's-eye view index with the corresponding subjective-viewpoint camera coordinates are input from the data storage unit 170 to the position-orientation calculating unit 520. Based on this information, the position-orientation calculating unit 520 calculates the position and orientation of the subjective-viewpoint camera 530 by processing similar to that of the position-orientation calculating unit 120 in the second embodiment. The position-orientation calculating unit 520 also outputs the calculated position to the data storage unit 170, and updates the azimuth-drift-error correction value stored in the data storage unit 170 with the update value φ of the azimuth-drift-error correction value of the orientation sensor 140 obtained in the course of calculating the position and orientation.
From the calculated position and orientation of the subjective-viewpoint camera 530 (in the world coordinate system) and the known position and orientation of the target object 580 in the camera coordinate system, the position-orientation calculating unit 520 calculates the position and orientation of the target object 580. The calculated position and orientation are output to the outside through the interface 1009.
In this way, the position and orientation of an arbitrary target object can be measured.
In the first modification of the second embodiment, the position-orientation calculating unit 520 obtains the position and orientation of the target object 580 after provisionally obtaining the position and orientation of the subjective-viewpoint camera 530. However, the position-orientation calculating unit 520 may instead obtain the position and orientation of the target object 580 directly. In that case, the orientation predicting unit 150 is designed to estimate the orientation of the target object 580 (with R_SC in expression (14) set to the 3 × 3 matrix that transforms an orientation from the target-object coordinate system to the sensor coordinate system). Furthermore, the position of the target object 580 is set as an element of the state vector s; the expression (17) for obtaining the subjective-viewpoint camera coordinates x_C^{Qkn} of a subjective-viewpoint index from the state vector s is replaced by
x_C^{Qkn} = [x_C^{Qkn} y_C^{Qkn} z_C^{Qkn}]^T = R_CO · (ΔR(φ) · R*)^{-1} (x_W^{Qkn} - [x y z]^T) + t_CO    (33)
and the expression (20) for obtaining the world coordinates x_W^{Pkm} of a bird's-eye view index is replaced by
x_W^{Pkm} = [x_W^{Pkm} y_W^{Pkm} z_W^{Pkm}]^T = ΔR(φ) · R* · R_CO^{-1} (x_C^{Pkm} - t_CO) + [x y z]^T    (34)
where R_CO denotes the matrix that transforms an orientation from the target-object coordinate system (in which a point on the target object 580 is defined as the origin and three mutually orthogonal axes are defined as the X, Y, and Z axes) to the subjective-viewpoint camera coordinate system, and t_CO denotes the vector that transforms a position between the same coordinate systems; both should be calculated in advance from the position and orientation of the target object 580 in the subjective-viewpoint camera coordinate system, stored as known values.
In the first modification of the second embodiment, the target object 580 may be an imaging device for capturing images of a scene. In addition, the subjective-viewpoint camera 530 may be mounted, for example, facing upward so as to have a field of view different from that of the imaging device capturing the scene, and the subjective-viewpoint indices Q_k may be placed correspondingly within the field of view of the subjective-viewpoint camera 530, for example on the ceiling. This helps to alleviate problems such as image distortion, because the subjective-viewpoint indices Q_k are not included in the field of view of the imaging device capturing the scene. Furthermore, by mounting a plurality of subjective-viewpoint cameras (identical to the subjective-viewpoint camera 530) on the target object 580, the measured position and orientation of the target object 580 can attain high accuracy in both position and orientation.
Second modification of the second embodiment
In the second embodiment and its first modification, in each of the position-orientation calculating units 120 and 520, the four-valued state vector s consisting of the position and the update value of the azimuth-drift-error correction value is treated as the unknown, and the state vector s that minimizes the error between the detected coordinates (actual measured values) of the subjective-viewpoint and bird's-eye view indices and their calculated values is obtained. In the second modification of the second embodiment, a geometric constraint given by an index is used instead. The position-orientation measuring apparatus according to the second modification of the second embodiment is characterized by including, as a component, a position-orientation calculating unit that uses a technique different from the total-error minimization technique.
The position-orientation measuring apparatus according to the second modification of the second embodiment is basically similar to that of the second embodiment. However, in this second modification, the position-orientation calculating unit 120 of the second embodiment is replaced by a different position-orientation calculating unit 120′ (not shown). In other words, the processing of the position-orientation calculating unit 120′ differs from that of the position-orientation calculating unit 120 in the second embodiment. The position-orientation measuring apparatus and its position-orientation measuring method according to the second modification of the second embodiment are described below.
In the second modification of the second embodiment, the functional units (the image input unit 160, the data storage unit 170, the index detecting unit 110, the orientation predicting unit 150, and the position-orientation calculating unit 120′) are software executed on a single computer, whose basic structure is shown in Fig. 2.
Fig. 10 is a flowchart of the process of calculating the parameters representing the position and orientation of the imaging device 130, performed by having the CPU 1001 execute the software program of the position-orientation calculating unit 120′. The program code corresponding to the flowchart should be loaded into the RAM 1002 in advance, before the following processing is performed.
At step S6000, the predicted orientation value R* of the imaging device 130 (the output of the orientation predicting unit 150) is input from the data storage unit 170 to the position-orientation calculating unit 120′.
At step S6003, the image coordinates of the bird's-eye view index detected by the index detecting unit 110 and the corresponding camera coordinates are input to the position-orientation calculating unit 120′.
When a plurality of bird's-eye view indices are provided, or a plurality of bird's-eye view cameras are installed, projected images of multiple bird's-eye view indices may be detected, so that the image coordinates of each image are input. In the second modification of the second embodiment, even in such a case, the number of bird's-eye view indices used in the subsequent processing is regarded as one, and the position-orientation calculating unit 120′ selects one suitable point as the image coordinates u^P of the bird's-eye view index P.
At step S6006, from the image coordinates u^P, the position-orientation calculating unit 120′ calculates the parameters of the straight line that constrains the position of the bird's-eye view index P in the world coordinate system. First, from the image coordinates u^P, the direction vector (h_x, h_y, h_z) of the straight line in the world coordinate system is calculated by
[h_x h_y h_z]^T = R_WB · [u_x^P/f_x^B  u_y^P/f_y^B  1]^T    (35)
where R_WB is the 3 × 3 matrix representing the orientation, in the world coordinate system, of whichever of the bird's-eye view cameras 180a, 180b, 180c, and 180d detected the bird's-eye view index P, and f_x^B and f_y^B denote the focal lengths in the X-axis and Y-axis directions, stored in advance in the external storage device 1007 as known values. A point on the straight line in the world coordinate system can then be expressed as a function of a parameter τ:
l_W(τ) = [h_x h_y h_z]^T τ + t_WB    (36)
where t_WB denotes the position, in the world coordinate system, of that camera among the bird's-eye view cameras 180a, 180b, 180c, and 180d, held in advance in the external storage device 1007 as a known value. The straight line expressed by expression (36) passes through the position of that bird's-eye view camera in the world coordinate system and through the position of the bird's-eye view index P; the position of the index P is obtained when the parameter τ takes an appropriate value.
In the following description, two parameters, namely the parameter τ giving the position of the bird's-eye view index P in the world coordinate system and the update value φ of the azimuth-drift-error correction value of the orientation sensor 140, are treated as the unknown parameters to be calculated. The unknown parameters are described by the two-valued state vector s′ = [τ φ]^T.
At step S6010, the position-orientation calculating unit 120′ sets the initial value s′ = [τ_{-1} 0]^T for the state vector s′. Here, τ_{-1} is set, for example, to the position τ on the straight line closest to the world-coordinate position of the index P obtained from the position of the imaging device 130 calculated in the previous processing cycle.
At step S6020, the image coordinates and world coordinates of each subjective-viewpoint index detected by the index detecting unit 110 are input to the position-orientation calculating unit 120′.
At step S6030, the position-orientation calculating unit 120′ determines whether the number N of input subjective-viewpoint indices is at least one. If N is less than one, the processing proceeds to step S6100 without executing the updates of s′ performed in steps S6040 to S6090.
At step S6040, for each subjective-viewpoint index Q_{kn}, the estimated image coordinates u^{Qkn*} are calculated according to the following expression and expression (18). The following expression obtains the subjective-viewpoint camera coordinates (coordinates in the subjective-viewpoint camera coordinate system) x_C^{Qkn} from the world coordinates x_W^{Qkn} and s′:
x_C^{Qkn} = [x_C^{Qkn} y_C^{Qkn} z_C^{Qkn}]^T = (ΔR(φ) · R*)^{-1} (x_W^{Qkn} - l_W(τ)) + x_C^P    (37)
and expression (18) obtains the image coordinates u^{Qkn*} from the camera coordinates x_C^{Qkn}. Here, x_C^P denotes the coordinate value of the index P in the subjective-viewpoint camera coordinate system, stored in advance in the external storage device 1007 as known information.
In other words, assuming that the position and orientation of the imaging device 130 obey the previously obtained state vector s′, the estimated image coordinate value of each subjective-viewpoint index is obtained from the positional relationship between the imaging device 130 and the subjective-viewpoint indices.
At step S6050, for each subjective-viewpoint index Q_{kn}, the position-orientation calculating unit 120′ calculates, according to expression (23), the error Δu^{Qkn} between the estimated image coordinates u^{Qkn*} and the actual measured value u^{Qkn}.
At step S6060, for each subjective-viewpoint index Q_{kn}, the position-orientation calculating unit 120′ calculates the 2 × 2 Jacobian matrix J_us′^{Qkn} (= ∂u/∂s′), whose elements are the partial derivatives, with respect to each element of s′, of the image-projection function of the state vector s′ (that is, the function F_C( ), composed in this second modification of expressions (37) and (18), with s′ as the state vector). Specifically, after calculating the 2 × 3 Jacobian matrix J_ux^{Qkn} (= ∂u/∂x), whose elements are the partial derivatives of the right side of expression (18) with respect to each element of the camera coordinates x_C^{Qkn}, and the 3 × 2 Jacobian matrix J_xs′^{Qkn} (= ∂x/∂s′), whose elements are the partial derivatives of the right side of expression (37) with respect to each element of the state vector s′, the 2 × 2 Jacobian matrix J_us′^{Qkn} is calculated from these matrices according to expression (25) (with s′ substituted for s).
At step S6070, the position-orientation calculating unit 120′ calculates the correction value Δs′ using expression (27) (with s′ substituted for s). In the second modification of the second embodiment, U denotes the 2N-dimensional error vector in which the errors Δu^{Qkn} of the subjective-viewpoint indices are arranged vertically, and Θ denotes the 2N × 2 matrix in which the image Jacobians J_us′^{Qkn} obtained for the subjective-viewpoint indices are arranged vertically.
At step S6080, using the correction value Δs′ calculated in step S6070, the position-orientation calculating unit 120′ corrects the state vector s′ according to expression (30) (with s′ substituted for s) and takes the obtained value as the new estimate.
At step S6090, the position-orientation calculating unit 120′ determines whether the calculation has converged, using a criterion such as whether the error vector U is smaller than a predetermined threshold or whether the correction value Δs′ is smaller than a predetermined threshold. If the calculation has not converged, the position-orientation calculating unit 120′ re-executes step S6040 and the subsequent steps using the corrected state vector s′.
If it is determined in step S6090 that the calculation has converged, then at step S6100 the position-orientation calculating unit 120′ calculates the orientation of the imaging device 130 from the obtained state vector s′. The orientation R is calculated, using the update value φ of the azimuth-drift-error correction value obtained up to the preceding step, based on expression (31). In addition, the position t is calculated, using the parameter τ obtained up to the preceding step and the orientation R, based on
t = l_W(τ) - R · x_C^P    (38)
At step S6110, the position-orientation calculating unit 120′ outputs the position and orientation of the imaging device 130 to the outside through the interface 1009. It also outputs the position t of the imaging device 130 to the data storage unit 170. The output format of the position and orientation may be a pair consisting of the 3 × 3 matrix R representing the orientation and the three-valued vector t representing the position, Euler angles obtained by transforming the orientation elements, a viewing transformation matrix calculated from the position and orientation, or any other method of describing position and orientation.
At step S6120, using the update value φ of the azimuth-drift-error correction value obtained in the preceding calculation steps, the position-orientation calculating unit 120′ updates the azimuth-drift-error correction value φ* stored in the data storage unit 170 according to expression (32).
At step S6130, the position-orientation calculating unit 120′ determines whether to terminate the processing. If it determines not to terminate, it proceeds to step S6000 and performs similar processing on the input data for the next frame and subsequent frames.
In the above processing, the straight line obtained from the bird's-eye view cameras 180a, 180b, 180c, and 180d, on which the bird's-eye view index lies, is used as a constraint; under this constraint, the position and orientation of the imaging device 130 are obtained so as to minimize the error of the subjective-viewpoint indices on the subjective-viewpoint image.
Compared with the position-orientation measurement results of the second embodiment, the position and orientation measured in the second modification of the second embodiment tend to depend more on the information obtained from the bird's-eye view cameras 180a, 180b, 180c, and 180d. Therefore, when the reliability of the information obtained from the bird's-eye view cameras is higher than that of the information obtained from the imaging device 130 (for example, when a high-resolution bird's-eye view camera is used and only a single index with high detection accuracy exists), the position-orientation measuring apparatus according to the second modification of the second embodiment can operate more effectively than that of the second embodiment.
Third modification of the second embodiment
In the second embodiment and its modifications, each of the position-orientation calculating units 120, 520, and 120′ obtains, as an unknown value, the update value of the azimuth-drift-error correction value of the orientation sensor 140. However, correction values for the errors of the orientation sensor 140 about all three axes may be obtained, without limiting the orientation correction term to the azimuth direction. The structure of the position-orientation measuring apparatus according to the third modification of the second embodiment is about the same as that of the second embodiment; therefore, only the differences from the second embodiment are described below.
In the third modification of the second embodiment, the data storage unit 170 stores a rotation-error correction matrix ΔR* of the orientation sensor 140 rather than the azimuth-drift-error correction value φ* of the orientation sensor 140.
Accordingly, instead of the azimuth-drift-error correction value φ*, the rotation-error correction matrix ΔR* of the orientation sensor 140 is input to the orientation predicting unit 150 (step S3010), and the orientation predicting unit 150 calculates the predicted orientation value R* (step S3020) by the following expression instead of expression (14):
R* = ΔR* · R# · R_SC    (39)
In the position-orientation calculating unit 120 of the third modification of the second embodiment, the position t = [x y z]^T of the imaging device 130 and a three-valued representation ω = [ξ ψ ζ]^T of the orientation of the imaging device 130, that is, six parameters in total, are treated as the unknown parameters to be calculated. In the following, the unknown parameters are written as the six-valued state vector s″ = [x y z ξ ψ ζ]^T.
Although there are various methods of representing an orientation (rotation matrix) by three values, here the orientation is represented by a three-valued vector whose magnitude defines the rotation angle and whose direction defines the rotation axis. The orientation ω can be expressed by the rotation matrix
R(ω) = \begin{pmatrix} \frac{\xi^2}{\theta^2}(1-\cos\theta)+\cos\theta & \frac{\xi\psi}{\theta^2}(1-\cos\theta)-\frac{\zeta}{\theta}\sin\theta & \frac{\xi\zeta}{\theta^2}(1-\cos\theta)+\frac{\psi}{\theta}\sin\theta \\ \frac{\psi\xi}{\theta^2}(1-\cos\theta)+\frac{\zeta}{\theta}\sin\theta & \frac{\psi^2}{\theta^2}(1-\cos\theta)+\cos\theta & \frac{\psi\zeta}{\theta^2}(1-\cos\theta)-\frac{\xi}{\theta}\sin\theta \\ \frac{\zeta\xi}{\theta^2}(1-\cos\theta)-\frac{\psi}{\theta}\sin\theta & \frac{\zeta\psi}{\theta^2}(1-\cos\theta)+\frac{\xi}{\theta}\sin\theta & \frac{\zeta^2}{\theta^2}(1-\cos\theta)+\cos\theta \end{pmatrix}    (40)
where
θ = \sqrt{ξ^2 + ψ^2 + ζ^2}
Thus, the orientation ω can be represented by a 3 × 3 rotation matrix R, and ω and R can be uniquely converted to each other. A detailed description of the transformation from R to ω is omitted here, because it is a known technique.
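A sketch of the ω-to-R conversion of expression (40), written in the equivalent Rodrigues form; the small-angle guard is an added implementation detail, not from the patent:

```python
def rotation_from_omega(omega):
    """3x3 rotation matrix R(omega) of expression (40); the norm of omega is the
    rotation angle and its direction is the rotation axis."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)                     # no rotation
    xi, psi, zeta = omega / theta            # unit rotation axis
    c, s = np.cos(theta), np.sin(theta)
    K = np.array([[0, -zeta, psi],
                  [zeta, 0, -xi],
                  [-psi, xi, 0]])            # skew-symmetric cross-product matrix
    # Rodrigues' formula, equivalent to expression (40)
    return c * np.eye(3) + (1 - c) * np.outer([xi, psi, zeta], [xi, psi, zeta]) + s * K
```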
In the third modification of the second embodiment, the position-orientation calculating unit 120 sets s″ = [x_{-1} y_{-1} z_{-1} ξ ψ ζ]^T as the initial value of the state vector s″ (step S4010). Here, x_{-1}, y_{-1}, and z_{-1} denote the position of the imaging device 130 calculated in the previous processing cycle, and ξ, ψ, and ζ denote the three-valued expression obtained from the predicted orientation value R*.
In the third modification of the second embodiment, the position-orientation calculating unit 120 branches according to whether the number of input indices is sufficient, by determining whether the total number of input indices is at least three (step S4030).
In the position-orientation calculating unit 120 of the third modification of the second embodiment, the expression obtaining the subjective-viewpoint camera coordinates x_C^{Qkn} of a subjective-viewpoint index Q_{kn} from its world coordinates x_W^{Qkn} and s (s″ in this third modification), and the expression obtaining the world coordinates x_W^{Pkm} of a bird's-eye view index P_{km} from its subjective-viewpoint camera coordinates x_C^{Pkm} and s (s″ in this third modification), are changed from expressions (17) and (20) of the second embodiment to
x_C^{Qkn} = [x_C^{Qkn} y_C^{Qkn} z_C^{Qkn}]^T = R(ω)^{-1} (x_W^{Qkn} - [x y z]^T)    (41)
x_W^{Pkm} = [x_W^{Pkm} y_W^{Pkm} z_W^{Pkm}]^T = R(ω) · x_C^{Pkm} + [x y z]^T    (42)
Accordingly, the image Jacobian for each index is the 2 × 6 Jacobian matrix J_us″ (= ∂u/∂s″).
In the third modification of the second embodiment, the position-orientation calculating unit 120 calculates the orientation R of the imaging device 130 from the obtained state vector s″ based on expression (40).
In the third modification, at step S4120, using the orientation R of the imaging device 130 obtained in the preceding calculation steps, the position-orientation calculating unit 120 computes the rotation-error correction matrix ΔR* of the orientation sensor 140 based on
R · R*^{-1} · ΔR* → ΔR*    (43)
and updates the value stored in the data storage unit 170.
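For completeness, a one-function sketch of the update of expression (43), under the same numpy conventions; the function name is illustrative:

```python
def update_rotation_correction(R, R_star, delta_R_star):
    """Update of the three-axis rotation-error correction matrix, expression (43)."""
    return R @ np.linalg.inv(R_star) @ delta_R_star
```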
The above processing measures the position and orientation of the imaging device 130.
Fourth modification of the second embodiment
In the second embodiment and its modifications, images of the bird's-eye view indices P_k placed on the imaging device 130 (on which the orientation sensor is mounted) are captured by using the bird's-eye view cameras 180a, 180b, 180c, and 180d fixed in the world coordinate system. However, the structure for obtaining the position and orientation of the imaging device 130 is not limited to that of the second embodiment and its modifications. The position-orientation measuring apparatus according to the fourth modification of the second embodiment is characterized by a structure in which, instead of using bird's-eye view cameras, a high-viewpoint camera 180 fixed on the imaging device 130 (and distinct from the imaging device 130) captures images of high-viewpoint indices P_k placed in the world coordinate system. The position-orientation measuring apparatus and its position-orientation measuring method according to the fourth modification of the second embodiment are described below.
Fig. 11 shows the structure of the position-orientation measuring apparatus according to the fourth modification of the second embodiment. As shown in Fig. 11, the position-orientation measuring apparatus according to the fourth modification (denoted by reference numeral 700) includes high-viewpoint cameras 180a and 180b, an image input unit 160, a data storage unit 770, an index detecting unit 110, an orientation sensor 140, an orientation predicting unit 150, and a position-orientation calculating unit 720. The position-orientation measuring apparatus 700 is connected to the imaging device 130. Parts having the same functions as in the second embodiment are labeled with the same reference numerals as in Fig. 6, and their descriptions are omitted.
The high-viewpoint cameras 180a and 180b, whose positions and orientations in the subjective-viewpoint camera coordinate system are known, are fixedly placed on the imaging device 130. Hereinafter, the term "high-viewpoint camera" denotes a camera placed on the imaging device 130 having a field of view different from that of the imaging device 130; the direction of such a camera is not limited to an upward ("high") direction. At a plurality of positions in the real space, in addition to the subjective-viewpoint indices Q_k whose images are captured by the imaging device 130, a plurality of high-viewpoint indices P_k (k = 1, ..., K_P) whose positions x_W^{Pk} in the world coordinate system are known are placed as indices in the images captured by the high-viewpoint cameras 180a and 180b. In the situation shown in Fig. 11, three subjective-viewpoint indices Q_1, Q_2, and Q_3 and two high-viewpoint indices P_1 and P_2 are provided; two subjective-viewpoint indices Q_1 and Q_3 are within the field of view of the imaging device 130, the high-viewpoint index P_1 is within the fields of view of the high-viewpoint cameras 180a and 180b, and the high-viewpoint index P_2 is within the field of view of the high-viewpoint camera 180b. In this situation, regarding the numbers of indices detected in the subjective-viewpoint image and in each high-viewpoint image, N = 2, M_a = 1, and M_b = 2. The index detecting unit 110 outputs the index identifiers (k_1 = 1, k_2 = 3, k_{a1} = 1, k_{b1} = 1, k_{b2} = 2), the identifiers of the cameras that captured the index images, and the corresponding image coordinates u^{Qk1}, u^{Qk2}, u_a^{Pka1}, u_b^{Pkb1}, and u_b^{Pkb2}. The high-viewpoint cameras 180a and 180b are placed so that, when the imaging device 130 is within the measurement range, at least one of them can capture images of the high-viewpoint indices P_k. The positions and orientations of the high-viewpoint cameras 180a and 180b should be stored in advance in the data storage unit 770 as known values. The predicted orientation value of the imaging device 130, the set of image coordinates u^{Qkn} and world coordinates x_W^{Qkn} of each subjective-viewpoint index detected by the index detecting unit 110, and the image coordinates u^{Pkm} of each high-viewpoint index with the corresponding world coordinates x_W^{Pkm} are input from the data storage unit 770 to the position-orientation calculating unit 720. From this information, the position-orientation calculating unit 720 calculates the position and orientation of the imaging device 130 and outputs them to the outside through an interface (not shown). It also outputs the calculated position of the imaging device 130 to the data storage unit 770. In addition, the position-orientation calculating unit 720 updates the azimuth-drift-error correction value stored in the data storage unit 770 with the update value of the azimuth-drift-error correction value of the orientation sensor 140 produced in the course of calculating the position and orientation.
The data storage unit 770 stores various types of data: the azimuth-drift-error correction value, the images input from the image input unit 160, the predicted orientation value input from the orientation predicting unit 150, the calculated position value input from the position-orientation calculating unit 720, the image coordinates and index identifiers of the indices input from the index detecting unit 110, the world coordinates of the subjective-viewpoint indices as known values, the world coordinates of the high-viewpoint indices, and the positions and orientations of the high-viewpoint cameras 180a and 180b in the subjective-viewpoint camera coordinate system. These data are input to or output from the data storage unit 770 as needed.
In the fourth modification of the second embodiment, the flowchart of the process of calculating the parameters representing the position and orientation of the imaging device 130 is almost identical to the flowchart (Fig. 8) of the second embodiment. Only the portions that differ from the second embodiment are described below.
In step S4020, the image coordinates and world coordinates of the bird's-eye view indices, instead of their image coordinates and subjective-viewpoint camera coordinates (coordinate values in the subjective-viewpoint camera coordinate system), are input to the position-orientation calculating unit 720.
In step S4040, the estimated image coordinate value $u^{P_{k_m}*}$ of each bird's-eye view index $P_{k_m}$ is calculated by using the world coordinates $x_W^{P_{k_m}}$ of the bird's-eye view index and the following function of the current state vector s:

$$u^{P_{k_m}*} = F_D(x_W^{P_{k_m}}, s) \quad (44)$$

Specifically, the function $F_D(\cdot)$ comprises the expression

$$x_C^{P_{k_m}} = \begin{pmatrix} x_C^{P_{k_m}} \\ y_C^{P_{k_m}} \\ z_C^{P_{k_m}} \end{pmatrix} = \left(\Delta R(\phi) \cdot R^*\right)^{-1} \left( x_W^{P_{k_m}} - \begin{pmatrix} x \\ y \\ z \end{pmatrix} \right) \quad (45)$$

for obtaining the subjective-viewpoint camera coordinates $x_C^{P_{k_m}}$ (coordinates in the subjective-viewpoint camera coordinate system) from the world coordinates $x_W^{P_{k_m}}$ and the state vector s; the expression

$$x_B^{P_{k_m}} = \begin{pmatrix} x_B^{P_{k_m}} \\ y_B^{P_{k_m}} \\ z_B^{P_{k_m}} \end{pmatrix} = R_{CB}^{-1} \left( x_C^{P_{k_m}} - t_{CB} \right) \quad (46)$$

for obtaining the bird's-eye view camera coordinates $x_B^{P_{k_m}}$ from the subjective-viewpoint camera coordinates $x_C^{P_{k_m}}$ (the coordinates of the index in the bird's-eye view camera coordinate system, in which a point on one of the bird's-eye view cameras 180a and 180b is defined as the origin and three mutually orthogonal axes are defined as the X-, Y-, and Z-axes); and expression (22) for obtaining the image coordinates $u^{P_{k_m}*}$ from the bird's-eye view camera coordinates $x_B^{P_{k_m}}$, where $f_B^x$ and $f_B^y$ denote the focal lengths of each of the bird's-eye view cameras 180a and 180b in the X-axis and Y-axis directions, respectively, $R_{CB}$ denotes a 3 × 3 matrix describing the orientation of each of the bird's-eye view cameras 180a and 180b, and $t_{CB}$ denotes a three-dimensional vector describing the position of each of the bird's-eye view cameras 180a and 180b; these are stored in advance as known values for each of the bird's-eye view cameras 180a and 180b. The position and orientation of the imaging device 130 are calculated as described above.
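To make the composition of expressions (44) to (46) concrete, the following Python sketch (not part of the patent; the function and parameter names, the assumption that the azimuth correction ΔR(φ) rotates about the world Z-axis, and the sign convention of the projection in expression (22) are all assumptions of this illustration) evaluates the observation function F_D for one bird's-eye view index:

```python
import numpy as np

def rot_z(phi):
    # Azimuth-drift-error correction rotation ΔR(φ), assumed here to be
    # a rotation by φ about the world Z (vertical) axis.
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def F_D(x_w, s, R_star, R_CB, t_CB, fBx, fBy):
    """Predict the image coordinates of a bird's-eye view index P_km.

    x_w        -- (3,) known world coordinates x_W^Pkm of the index
    s          -- state vector [x, y, z, phi]: imaging-device position and
                  the update of the azimuth-drift-error correction value
    R_star     -- (3,3) predicted orientation R* of the imaging device
    R_CB, t_CB -- known orientation/position of the bird's-eye view camera
                  relative to the subjective-viewpoint camera
    fBx, fBy   -- focal lengths of the bird's-eye view camera
    """
    x, y, z, phi = s
    # Expression (45): world -> subjective-viewpoint camera coordinates.
    # (ΔR(φ)·R*)^{-1} equals the transpose, since both factors are rotations.
    x_c = (rot_z(phi) @ R_star).T @ (x_w - np.array([x, y, z]))
    # Expression (46): subjective-viewpoint -> bird's-eye view camera coordinates.
    x_b = R_CB.T @ (x_c - t_CB)
    # Expression (22), assumed to be a standard pinhole projection.
    return np.array([-fBx * x_b[0] / x_b[2], -fBy * x_b[1] / x_b[2]])
```

In step S4040, the residual between the detected image coordinates and this predicted value would then be accumulated into the error vector U.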
Although the fourth modification of the second embodiment uses a plurality of bird's-eye view cameras 180a and 180b, it is not always necessary to use a plurality of bird's-eye view cameras; obviously, similar advantages can be obtained in the fourth modification of the second embodiment even if a single bird's-eye view camera is used.
Unlike the first to third modifications of the second embodiment, which use bird's-eye view cameras fixed in the scene, the mechanism described in the fourth modification of the second embodiment is applicable when the bird's-eye view cameras 180a and 180b fixed to the imaging device 130 can capture images of the bird's-eye view indices $P_k$ placed in the world coordinate system.
Other modifications
Although each of the foregoing embodiments and modifications uses the Gauss-Newton method represented by expression (9) or (27) to calculate Δs, the correction value Δs does not always have to be calculated from the error vector U and the matrix Θ by the Gauss-Newton method. For example, it may be calculated by using the Levenberg-Marquardt (LM) method, a well-known iterative method for solving nonlinear equations. In addition, a robust estimation technique such as M-estimation, a statistical estimation method, may be used in combination, and any other numerical computation technique may be used.
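As an illustration of this alternative, the following sketch (an illustration under stated assumptions, not the patent's implementation; it presumes U stacks the index errors as measured-minus-estimated values and Θ stacks the corresponding image Jacobians row by row) computes Δs by either method:

```python
import numpy as np

def correction_step(Theta, U, method="gauss-newton", lam=1e-3):
    """Solve for the correction Δs from the stacked Jacobian Θ and error vector U.

    Gauss-Newton solves the normal equations (Θ^T Θ) Δs = Θ^T U;
    Levenberg-Marquardt damps the diagonal, interpolating between
    Gauss-Newton and gradient descent.
    """
    A = Theta.T @ Theta
    if method == "levenberg-marquardt":
        A = A + lam * np.diag(np.diag(A))   # LM damping term
    return np.linalg.solve(A, Theta.T @ U)
```

In an iterative loop, the damping factor lam would be decreased when a step reduces the stacked error and increased otherwise, which is what makes LM more robust than plain Gauss-Newton far from the solution.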
Each of the foregoing embodiments and modifications uses a nonlinear optimization technique to obtain an optimal solution (error minimization) for each input image. However, the technique for reducing the error based on the errors of the indices on the images is not limited to nonlinear optimization. Even if another technique is used, its use does not impair the essence of the present invention, in which the position and orientation of the targeted object are obtained with high accuracy and stability by calculating the position of the targeted object to be measured and the azimuth-drift-error correction value of the orientation sensor from the image information obtained from the subjective-viewpoint camera and, in the second embodiment, the image information obtained from the bird's-eye view cameras. For example, when an extended Kalman filter or an iterative extended Kalman filter is used, both of which are known techniques described in detail in J. Park, B. Jiang, and U. Neumann, "Vision-based pose computation: robust and accurate augmented reality tracking", Proc. Second International Workshop on Augmented Reality (IWAR '99), pp. 3-12, 1999, a solution that reduces the error based on the errors of the indices on the images is obtained; by defining s as the state vector of each of the foregoing embodiments and modifications and defining expression (3), or expressions (16) and (19), as the observation equation, a filter having the advantages of each of the foregoing embodiments can be constituted.
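For concreteness, a generic extended-Kalman-filter measurement update of this kind might look as follows (a minimal sketch under assumed interfaces, not the filter of the cited paper: h stands for an observation equation such as expression (3) or expressions (16) and (19), H for its Jacobian at s, and R_meas for an assumed measurement noise covariance):

```python
import numpy as np

def ekf_measurement_update(s, P, u_meas, h, H, R_meas):
    """One EKF measurement update driven by the index errors on the image.

    s, P   -- state estimate and its covariance
    u_meas -- stacked detected image coordinates of the indices
    h      -- callable: predicts the image coordinates from the state s
    H      -- Jacobian of h evaluated at s (the image Jacobian)
    """
    y = u_meas - h(s)                      # innovation: index error on the image
    S = H @ P @ H.T + R_meas               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    s_new = s + K @ y
    P_new = (np.eye(len(s)) - K @ H) @ P
    return s_new, P_new
```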
In addition, the foregoing embodiments and modifications use indices each of which indicates one set of coordinates (hereinafter referred to as "point indices"). However, the indices are not limited to this type; indices of other types can also be used.
For example, each of the foregoing embodiments and modifications can use markers having specific geometric shapes, as used in a known position-orientation measuring apparatus, as the subjective-viewpoint indices and/or the bird's-eye view indices (see, for example, Takahashi, Ishii, Makino, Nakashizu, "VR Intafesu-notameno Tangan-niyoru Chohokei Maka Ichi-shisei-no Koseido Jitsujikan Suiteiho (Method for Real-Time Estimation of the Position And Orientation of Rectangular Marker through Single View for VR Interface)", Sanjigen Konfarensu 96 Koen Ronbunshu (Three-dimensional Conference '96 Collected Papers), pp. 167-172, 1996). For example, when quadrangular markers are used as indices, by storing the world coordinates of the vertices of each quadrangular index as known values (or calculating them from the position, orientation, and size of the marker) and detecting the image coordinates of the vertices from the image, advantages similar to those obtained with four point markers in the first embodiment and its modifications can be obtained. In particular, a configuration having a quadrangular marker (carrying ID information) on the targeted object to be measured, or a quadrangular marker on the orientation sensor, can be said to be a particularly suitable form, since such a configuration is expected to provide good accuracy and identifiability in detecting the marker from the image. Regarding quadrangular markers, see, for example, Kato, M. Billinghurst, Asano, Tachibana, "Maka Tsuiseki-nimotozuku Kakuchogenjitsukan Shisutemu-to Sono Kyariburehshon (Augmented Reality System based on Marker Tracking and Calibration Thereof)", Nippon Bacharu Riarithi Gakkai Ronbunshi (The Virtual Reality Society of Japan, Collected Papers), vol. 4, no. 4, pp. 607-616, 1999.
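As a small illustration of treating a quadrangular marker as four point indices (a sketch with assumed names; the marker is taken to be square and planar, which is an assumption of this example), the world coordinates of its vertices can be generated from its stored position, orientation, and size:

```python
import numpy as np

def quad_marker_vertices(center_w, R_marker, size):
    """Expand a square marker into the world coordinates of its four vertices,
    each of which can then be used exactly like a point index ("dot mark").

    center_w -- (3,) world position of the marker center
    R_marker -- (3,3) world orientation of the marker plane
    size     -- edge length of the marker
    """
    h = size / 2.0
    corners_local = np.array([[-h, -h, 0.0],
                              [ h, -h, 0.0],
                              [ h,  h, 0.0],
                              [-h,  h, 0.0]])
    return center_w + corners_local @ R_marker.T   # one row per vertex
```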
In addition, the foregoing embodiments and modifications can use line features (hereinafter referred to as "line indices") as indices, as used in another known position-orientation measuring apparatus (see, for example, D. G. Lowe, "Fitting parameterized three-dimensional models to images", IEEE Transactions on PAMI, vol. 13, no. 5, pp. 441-450, 1991). For example, using the distance from the origin to a straight line as the error criterion, the error vector U is constituted by errors $\Delta d$ between the values $d$ measured from the image and the values $d^*$ estimated from the state vector s, and the matrix Θ is constituted by 1 × 4 Jacobian matrices $J_{ds} \left(= \partial d / \partial s\right)$, each having as its elements the partial derivatives of the expression for $d^*$ with respect to the elements of the state vector s; the position and orientation can then be measured by a mechanism similar to that in the first embodiment and its modifications. Furthermore, by stacking the errors and image Jacobians obtained from line indices, point indices, and other indices, their features can be used jointly. In particular, in the second embodiment and its first, second, and third modifications, different types of indices may be used as the subjective-viewpoint indices and the bird's-eye view indices. One preferred example is to use natural line indices as the subjective-viewpoint indices and colored spherical markers as the bird's-eye view indices.
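The joint use of heterogeneous features amounts to stacking their errors and Jacobians into one error vector U and one matrix Θ; the following sketch (illustrative names and shapes only, assuming a 2-row error/Jacobian per point index and a 1-row error/Jacobian per line index as described above) shows the bookkeeping:

```python
import numpy as np

def stack_heterogeneous_features(point_feats, line_feats):
    """Build U and Θ jointly from point indices and line indices.

    point_feats -- list of (delta_u, J) pairs: (2,) error and (2, dim_s) Jacobian
    line_feats  -- list of (delta_d, J_ds) pairs: scalar error and (1, dim_s)
                   Jacobian ∂d/∂s
    """
    errors, jacobians = [], []
    for delta_u, J in point_feats:
        errors.append(np.asarray(delta_u))
        jacobians.append(np.asarray(J))
    for delta_d, J_ds in line_feats:
        errors.append(np.atleast_1d(delta_d))
        jacobians.append(np.asarray(J_ds))
    U = np.concatenate(errors)      # stacked error vector
    Theta = np.vstack(jacobians)    # stacked Jacobian matrix
    return U, Theta
```

The stacked U and Θ can then be fed to the same Δs computation used for point indices alone.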
In the second embodiment and its modifications, the number of subjective-viewpoint cameras 530 is one. However, a plurality of subjective-viewpoint cameras may be mounted on the targeted object 580 and used for measuring its position and orientation. In this case, the orientation predicting unit 150 and the index detection unit 110 process the image output by each camera. In addition, the orientation predicting unit 150 and the position-orientation calculating unit 520 perform their arithmetic operations by using the position and orientation of the targeted object 580 as the reference. The position-orientation calculating unit 520 estimates the position and orientation by a mechanism similar to that in each embodiment; that is, the state vector s is constituted by the position of the targeted object 580 and the azimuth-drift-error correction value, the error and image Jacobian of each index detected in each image are obtained from the index information in accordance with expression (33) (with $R_{CO}$ and $t_{CO}$ differing for each camera), and the error vector U and the matrix Θ are constituted by accumulating the obtained values.
In addition, in the first to fourth modifications of the second embodiment, the number of imaging devices 130 is one. However, even when two imaging devices are to be measured, as in the case of a stereoscopic video see-through head-mounted display, the position and orientation can be measured by a similar technique by using one imaging device (for example, the imaging device for the left eye) as the reference.
Although the foregoing embodiments and modifications use an orientation sensor having an azimuth drift error, any orientation sensor whose significant error occurs only in the azimuth direction can be used as the orientation sensor. For example, by using an orientation sensor that measures the tilt angles with an acceleration sensor and measures the azimuth angle with a geomagnetic sensor, the position and orientation of the targeted object can be measured by processing similar to that in the foregoing embodiments and modifications, using the position and the update value of the azimuth-drift-error correction value as the unknown parameters. However, since the error characteristics in this case differ from those of an azimuth drift error, this case is not suited to the third or fourth modification of the first embodiment. In addition, even if an orientation sensor that measures only the tilt directions is used, the position and orientation of the targeted object can be measured by similar processing by treating it as a three-axis orientation sensor whose measured azimuth value is always 0.
A camera that captures images by light of a wavelength different from that of visible light can be used as each bird's-eye view camera in each of the foregoing embodiments and modifications. For instance, cameras that capture images by infrared light can be used as the bird's-eye view cameras, and markers that emit or reflect infrared light can be used as the bird's-eye view indices. An advantage of this arrangement in the second embodiment is that, since the subjective-viewpoint indices are not imaged by the infrared bird's-eye view cameras, erroneous detection of the subjective-viewpoint indices in the bird's-eye view images can be eliminated.
In this case, markers that emit infrared light at different times can be used as the bird's-eye view indices so that the indices can be identified. In other words, after the index detection unit 110 extracts the region corresponding to a marker from a bird's-eye view image, the centroid of that region can be used as the detected coordinates of the index that emitted infrared light at the same timing at which the bird's-eye view camera captured the image. Obviously, when the number of bird's-eye view indices is one, no emission timing control needs to be performed.
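A minimal sketch of such time-division identification (purely illustrative; the frame-synchronous round-robin emission schedule and the brightness thresholding are assumptions of this example) could be:

```python
import numpy as np

def detect_timed_ir_index(image, frame_index, num_indices, threshold=200):
    """Detect one time-multiplexed infrared bird's-eye view index per frame.

    Index k is assumed to emit only in frames with frame_index % num_indices == k,
    so the centroid of the bright region in this frame is assigned to index k.
    """
    index_id = frame_index % num_indices
    ys, xs = np.nonzero(image >= threshold)        # pixels of the emitting index
    if xs.size == 0:
        return index_id, None                      # index not visible this frame
    centroid = np.array([xs.mean(), ys.mean()])    # region centre of gravity
    return index_id, centroid
```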
In addition, in the first and fourth modifications of the second embodiment, cameras that capture images by infrared light can be used as the subjective-viewpoint camera and the bird's-eye view cameras, and indices that emit or reflect infrared light can be used as the subjective-viewpoint indices and the bird's-eye view indices.
Furthermore, cameras that capture images by light of a wavelength different from that of visible light are not limited to infrared cameras; cameras that capture images by ultraviolet or similar light can also be used. Cameras that simultaneously capture images by both visible light and light of a different wavelength can also be used.
Other embodiments
Obviously, the present invention also covers the case in which a storage medium (or recording medium) containing software program code for implementing the functions of the foregoing embodiments and modifications is supplied to a system or an apparatus, and a computer (or CPU/MPU) of the system or apparatus reads out and executes the program code on the storage medium. In this case, the program code itself read out from the storage medium implements the functions of the foregoing embodiments and modifications, so the storage medium containing the program code is included in the present invention. The present invention also obviously includes the case in which the computer executes the read program code and, in addition to the functions of the foregoing embodiments and modifications being implemented thereby, the program code instructs an operating system or similar software running on the computer to perform all or part of the actual processing that implements the functions of the foregoing embodiments and modifications.
Furthermore, the present invention obviously includes the case in which the program code read out from the storage medium is written into a memory in a function expansion card inserted into the computer or in a function expansion unit connected to the computer, and a CPU in the expansion card or expansion unit then performs all or part of the actual processing under the instructions of the program code, whereby the functions of the foregoing embodiments and modifications are implemented.
When the present invention is applied to the above storage medium, program code corresponding to the above-described flowcharts is stored in the storage medium.
Since the present invention can have many widely different embodiments without departing from its spirit and scope, it is to be understood that the invention is not limited to the specific embodiments described herein but is instead defined by the appended claims.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. On the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims priority from Japanese Patent Application No. 2004-144893 filed May 14, 2004, Japanese Patent Application No. 2004-144894 filed May 14, 2004, and Japanese Patent Application No. 2005-053441 filed February 28, 2005, which are hereby incorporated by reference herein.

Claims (7)

1. An information processing method for calculating the position and orientation of an imaging device that captures images of real space, the information processing method comprising:
a first-image input step of inputting a first image captured by the imaging device;
a second-image input step of inputting a second image captured by a bird's-eye view imaging device that captures images of the imaging device from a bird's-eye view position;
an orientation input step of inputting a measured orientation value output from an orientation sensor that measures information concerning the orientation of the imaging device;
a first detection step of detecting, from the first image input in the first-image input step, first-index image-coordinate feature values concerning the image coordinates of first indices placed in real space;
a second detection step of detecting, from the second image input in the second-image input step, second-index image-coordinate feature values concerning the image coordinates of second indices placed on the imaging device; and
a position-orientation calculation step of calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected in the first detection step, the second-index image-coordinate feature values detected in the second detection step, and the measured orientation value input in the orientation input step.
2. The information processing method according to claim 1, wherein, in the position-orientation calculation step, a parameter concerning a correction value for correcting the azimuth error of the orientation sensor and a parameter representing the position of the imaging device are obtained by using the first-index image-coordinate feature values detected in the first detection step and the second-index image-coordinate feature values detected in the second detection step, and the position and orientation of the imaging device are calculated by using the obtained parameters.
3. The information processing method according to claim 1, wherein:
the second-index image-coordinate feature values detected in the second detection step represent the image coordinates of the second indices; and
in the position-orientation calculation step, a straight line constraining the position of each second index in three-dimensional space is obtained from the image coordinates of the second index represented by the second-index image-coordinate feature values and from the position and orientation of the bird's-eye view imaging device, and, under the constraint that the second index lies on the straight line, the position and orientation of the imaging device are calculated by using the first-index image-coordinate feature values detected in the first detection step and the measured orientation value input in the orientation input step.
4. The information processing method according to claim 3, wherein, in the position-orientation calculation step, a parameter representing the position of each second index on the straight line and a parameter concerning a correction value for correcting the azimuth error of the orientation sensor are obtained by using the first-index image-coordinate feature values detected in the first detection step, and the position and orientation of the imaging device are calculated by using the obtained parameters.
5. An information processing method for calculating the position and orientation of an imaging device that captures images of real space, the information processing method comprising:
a first-image input step of inputting a first image captured by the imaging device;
a second-image input step of inputting a second image captured by a second imaging device that captures images of real space from a viewpoint position, the second imaging device being mounted on the imaging device;
an orientation input step of inputting a measured orientation value output from an orientation sensor that measures information concerning the orientation of the imaging device;
a first detection step of detecting, from the first image input in the first-image input step, first-index image-coordinate feature values concerning the image coordinates of first indices placed in real space;
a second detection step of detecting, from the second image input in the second-image input step, second-index image-coordinate feature values concerning the image coordinates of second indices placed in real space; and
a position-orientation calculation step of calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected in the first detection step, the second-index image-coordinate feature values detected in the second detection step, and the measured orientation value input in the orientation input step.
6. An information processing apparatus for calculating the position and orientation of an imaging device that captures images of real space, the information processing apparatus comprising:
a first-image input unit for inputting a first image captured by the imaging device;
a second-image input unit for inputting a second image captured by a bird's-eye view imaging device that captures images of the imaging device from a bird's-eye view position;
an orientation input unit for inputting a measured orientation value output from an orientation sensor that measures information concerning the orientation of the imaging device;
a first detection unit for detecting, from the first image input by the first-image input unit, first-index image-coordinate feature values concerning the image coordinates of first indices placed in real space;
a second detection unit for detecting, from the second image input by the second-image input unit, second-index image-coordinate feature values concerning the image coordinates of second indices placed on the imaging device; and
a position-orientation calculating unit for calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected by the first detection unit, the second-index image-coordinate feature values detected by the second detection unit, and the measured orientation value input by the orientation input unit.
7. An information processing apparatus for calculating the position and orientation of an imaging device that captures images of real space, the information processing apparatus comprising:
a first-image input unit for inputting a first image captured by the imaging device;
a second-image input unit for inputting a second image captured by a second imaging device that captures images of real space from a viewpoint position, the second imaging device being mounted on the imaging device;
an orientation input unit for inputting a measured orientation value output from an orientation sensor that measures information concerning the orientation of the imaging device;
a first detection unit for detecting, from the first image input by the first-image input unit, first-index image-coordinate feature values concerning the image coordinates of first indices placed in real space;
a second detection unit for detecting, from the second image input by the second-image input unit, second-index image-coordinate feature values concerning the image coordinates of second indices placed in real space; and
a position-orientation calculating unit for calculating the position and orientation of the imaging device by using the first-index image-coordinate feature values detected by the first detection unit, the second-index image-coordinate feature values detected by the second detection unit, and the measured orientation value input by the orientation input unit.
CNB200510069367XA 2004-05-14 2005-05-13 Information processing method and apparatus for finding position and orientation of targeted object Expired - Fee Related CN100410622C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP144893/2004 2004-05-14
JP144894/2004 2004-05-14
JP2004144893 2004-05-14
JP053441/2005 2005-02-28

Publications (2)

Publication Number Publication Date
CN1696606A CN1696606A (en) 2005-11-16
CN100410622C true CN100410622C (en) 2008-08-13

Family

ID=35349434

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200510069367XA Expired - Fee Related CN100410622C (en) 2004-05-14 2005-05-13 Information processing method and apparatus for finding position and orientation of targeted object

Country Status (1)

Country Link
CN (1) CN100410622C (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4960754B2 (en) * 2007-04-25 2012-06-27 キヤノン株式会社 Information processing apparatus and information processing method
EP2320382A4 (en) * 2008-08-29 2014-07-09 Mitsubishi Electric Corp Bird's-eye image forming device, bird's-eye image forming method, and bird's-eye image forming program
CN101877819B (en) * 2009-04-29 2013-03-20 中华电信股份有限公司 Signal source tracking device and tracking method thereof
JP2011203823A (en) 2010-03-24 2011-10-13 Sony Corp Image processing device, image processing method and program
CN102200881B (en) * 2010-03-24 2016-01-13 索尼公司 Image processing apparatus and image processing method
US8855442B2 (en) * 2012-04-30 2014-10-07 Yuri Owechko Image registration of multimodal data using 3D-GeoArcs
JP6462328B2 (en) * 2014-11-18 2019-01-30 日立オートモティブシステムズ株式会社 Travel control system
WO2016093241A1 (en) * 2014-12-09 2016-06-16 旭化成株式会社 Position/orientation detection device and position/orientation detection program
DE102016109153A1 (en) * 2016-05-18 2017-11-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. METHOD FOR ADJUSTING A VIEWPOINT IN A VIRTUAL ENVIRONMENT
CN105913497B (en) * 2016-05-27 2018-09-07 杭州映墨科技有限公司 Virtual reality space movable positioning system for virtually seeing room and method
US10013798B2 (en) * 2016-08-30 2018-07-03 The Boeing Company 3D vehicle localizing using geoarcs
CN107063190B (en) * 2017-03-02 2019-07-30 辽宁工程技术大学 Pose high-precision direct method estimating towards calibration area array cameras image
JP7126066B2 (en) * 2017-06-30 2022-08-26 パナソニックIpマネジメント株式会社 Projection indication device, parcel sorting system and projection indication method
CN107977977B (en) * 2017-10-20 2020-08-11 深圳华侨城卡乐技术有限公司 Indoor positioning method and device for VR game and storage medium
CN107966155A (en) * 2017-12-25 2018-04-27 北京地平线信息技术有限公司 Object positioning method, object positioning system and electronic equipment
CN113495259A (en) * 2020-04-07 2021-10-12 广东博智林机器人有限公司 MEMS scanning mirror deflection angle calibrating device


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002228442A (en) * 2000-11-30 2002-08-14 Mixed Reality Systems Laboratory Inc Positioning attitude determining method and device, and recording media
JP2003058896A (en) * 2001-08-10 2003-02-28 Nec Corp Device, method and program for recognizing positioning attitude

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A hybrid registration method for outdoor augmented reality. SATOH K ET AL. AUGMENTED REALITY, 2001 *
Robust vision-based registration utilizing bird's-eye view with user's view. SATOH K ET AL. MIXED AND AUGMENTED REALITY, 2003 *

Also Published As

Publication number Publication date
CN1696606A (en) 2005-11-16

Similar Documents

Publication Publication Date Title
CN100410622C (en) Information processing method and apparatus for finding position and orientation of targeted object
EP1596332B1 (en) Information processing method and apparatus for finding position and orientation of targeted object
CN100408969C (en) Method and device for determining position and direction
EP1596330B1 (en) Estimating position and orientation of markers in digital images
CN101116101B (en) Position posture measuring method and device
US8761439B1 (en) Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit
EP1870856B1 (en) Information-processing method and apparatus for calculating information regarding measurement target on the basis of captured images
US7613361B2 (en) Information processing method and device
JP5036260B2 (en) Position and orientation calculation method and apparatus
Oskiper et al. Multi-sensor navigation algorithm using monocular camera, IMU and GPS for large scale augmented reality
CN107862719A (en) Scaling method, device, computer equipment and the storage medium of Camera extrinsic
CN112734841B (en) Method for realizing positioning by using wheel type odometer-IMU and monocular camera
CN108846857A (en) The measurement method and visual odometry of visual odometry
CN115371665B (en) Mobile robot positioning method based on depth camera and inertial fusion
CN113034571B (en) Object three-dimensional size measuring method based on vision-inertia
CN111307146A (en) Virtual reality wears display device positioning system based on binocular camera and IMU
JP4566786B2 (en) Position and orientation measurement method and information processing apparatus
CN114777768A (en) High-precision positioning method and system for satellite rejection environment and electronic equipment
Del Pizzo et al. Reliable vessel attitude estimation by wide angle camera
Panahandeh et al. Vision-aided inertial navigation using planar terrain features
CN116184430B (en) Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit
Giordano et al. 3D structure identification from image moments
CN111539982A (en) Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
Grundmann et al. A gaussian measurement model for local interest point based 6 dof pose estimation
CN113048985B (en) Camera relative motion estimation method under known relative rotation angle condition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080813

Termination date: 20190513

CF01 Termination of patent right due to non-payment of annual fee