CN1674047A - Six-degree-of-freedom visual tracking method and system based on a microcomputer parallel processing structure - Google Patents
Six-degree-of-freedom visual tracking method and system based on a microcomputer parallel processing structure
- Publication number
- CN1674047A, CN200410017199, CN200410017199A
- Authority
- CN
- China
- Prior art keywords
- image
- microcomputer
- marker
- point
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The present invention relates to a six-degree-of-freedom visual tracking method and system based on a microcomputer parallel processing structure. The tracking method includes the following steps: 1) installing an artificial marker on the tracked object; 2) acquiring images of the scene containing the artificial marker with several video cameras to obtain original images; 3) processing the acquired images on the computers; and 4) completing the 3D reconstruction of the spatial position and orientation of the artificial marker and outputting the reconstruction result to various human-computer interaction systems via network communication equipment. The tracking system includes N microcomputers, N image acquisition systems, N network communication cards and one network switch. The method and system realize accurate localization and tracking of spatial targets and are largely immune to electromagnetic interference, among other advantages.
Description
Technical field
The present invention relates to a visual tracking method and system for augmented reality and virtual reality (or other human-computer interaction systems), and in particular to a six-degree-of-freedom visual tracking method and system based on a microcomputer parallel processing structure.
Background technology
Tracking the spatial position and orientation (six degrees of freedom) of the helmet or other objects is a key technique in augmented reality (Augmented Reality) and virtual reality (Virtual Reality) systems, and is also an advanced interaction technique of human-computer interaction systems. In augmented reality and virtual reality systems, the purpose of tracking is to obtain the spatial position and orientation of the viewpoint of the user wearing the helmet, which in turn yields the key parameters for rendering the virtual scene; this is also the crux of correctly fusing virtual and real information in an augmented reality system.
The three-dimensional tracking sensors currently mounted on helmets are basically electromagnetic sensors. Electromagnetic sensors are susceptible to electromagnetic interference from the surrounding environment, which makes the tracking results unstable, degrades the realism of the rendered graphics and produces jitter. In addition, because electromagnetic sensors transmit signals and power over cables, they are generally wired devices, which reduces the comfort of wearing the helmet; electromagnetic sensors also have a limited effective range and are expensive. Tracking the target object with computer stereo vision techniques can obtain motion parameters of higher precision, has strong anti-interference capability, needs no signal cable (improving comfort), and can extend the tracking range by using multiple camera systems, thereby overcoming some of the shortcomings of other three-dimensional sensors. However, target tracking algorithms based on computer vision require a very large amount of computation; for a real-time application system, the real-time performance of tracking is a key issue. Moreover, multi-target tracking is an important indicator of tracker performance in human-computer interaction systems, and realizing multi-target tracking further increases the computational load. To guarantee real-time performance, dedicated hardware is generally adopted, such as special-purpose DSP (Digital Signal Processing) chips or high-performance computers, which are costly and (in the case of DSPs) complex to program.
Summary of the invention
In view of the problems and shortcomings of the prior art, the purpose of the present invention is to provide a six-degree-of-freedom visual tracking method and system based on a microcomputer parallel processing structure, thereby realizing accurate localization and tracking of spatial position and orientation (six degrees of freedom). When the system runs, the user wears a helmet fitted with an artificial marker (or the marker is installed on another tracked object), several cameras capture images of the scene, the corresponding processing units automatically track the artificial marker and extract the centers of the marker dots, the system selects the images of the two best nodes, and the spatial information (position and orientation) of the marker is recovered using the principle of stereo vision.
The present invention is realized by the following technical solution:
A six-degree-of-freedom visual tracking method based on a microcomputer parallel processing structure, characterized in that the operation steps are as follows:
1) mounting an artificial marker on the tracked object;
2) acquiring video of the scene containing the artificial marker with N cameras to obtain original images;
3) processing the acquired original images on the computers; the processing steps are:
1. a multi-target real-time tracking algorithm: the captured original image containing the artificial marker is processed by threshold-segmentation binarization, seed filling, shape restriction, geometric-center solving and geometric-constraint processing to obtain the image coordinates of the feature points of the basic lattice and the expansion lattice of the artificial marker; the feature points are then converted into a standard image by perspective transformation, and according to the position distribution of the expansion lattice in the standard image, the features are mapped into a feature matrix, realizing marker identification; using artificial markers with different patterns realizes multi-target tracking;
2. a real-time multi-view space-vector reconstruction algorithm: a multi-view tracking technique and pose reconstruction algorithm with a multi-camera array are adopted, using a multi-view stereo vision structure distributed on a planar grid; the optical axis of each camera is perpendicular to the horizontal plane and the optical centers are evenly distributed on the grid points; the image selection principle of the reconstruction algorithm is: the two images in which the centroid of the marker point set lies closest to the image center are selected;
3. a parallel processing structure based on microcomputers: a parallel processing structure based on N (N ≥ 2) microcomputers is adopted, and N+1 processes are defined using MPI: N processes implement the target tracking algorithm and carry out image acquisition, feature-point extraction and processing, and 1 process carries out the three-dimensional reconstruction (of the spatial position and orientation);
4) after the three-dimensional reconstruction of the spatial position and orientation of the marker is completed, the reconstruction result is output to various human-computer interaction systems through network communication equipment.
The six-degree-of-freedom visual tracking method based on the microcomputer parallel processing structure according to claim 1 is characterized in that the artificial marker has the following features:
1) all pattern points are coplanar, i.e. the dots of the pattern are distributed on one flat plate;
2) the radius of every dot is identical, and the radius is determined by the working range of the application and the achievable recognition resolution;
3) the horizontal and vertical distances between the centers of adjacent dots are equal, and the distance is determined by the working range of the application and the achievable recognition resolution;
4) the centers of two groups of edge points are respectively collinear, and the two lines are orthogonal to each other; one group contains more points than the other;
5) the dots are white and the background is black.
A system for the six-degree-of-freedom visual tracking method based on the microcomputer parallel processing structure of claim 1, comprising N microcomputers, N image acquisition systems, N network communication cards and one network switch, characterized in that an artificial marker is installed on the tracked object, the cameras of the image acquisition systems are aimed at the scene where the artificial marker is located, the camera output is input to the microcomputers through image capture cards, and the microcomputers are interconnected through the network communication cards and the network switch.
The technical scheme of the present invention is described in detail below:
An artificial marker for multi-target real-time tracking, shown in Fig. 1, is mounted on the top of the user's head or on the helmet (or other tracked object). The pattern of this artificial marker has the following characteristics:
1) all pattern points are coplanar, i.e. the dots of the pattern are distributed on one flat plate;
2) every dot is circular;
3) the radius of every dot is identical, and the radius is determined by the working range of the application and the achievable recognition resolution;
4) the horizontal and vertical distances between the centers of adjacent dots are equal, and the distance is determined by the working range of the application and the achievable recognition resolution;
5) there are two groups of points whose centers are respectively collinear, and the two lines are orthogonal to each other; one group contains 5 points and its line forms the long side, the other group contains 3 points and its line forms the short side;
6) the dots are white and the background is black;
7) the 5 points on the right and the 3 points below in each marker pattern (as shown in Fig. 2) are called boundary points (the basic lattice), and the other points inside the dashed line are called inner points (the expansion lattice); when the marker is segmented from the background, the information of the basic lattice is used, while different markers are distinguished by the number and ordering of the inner points.
An algorithm for realizing multi-target real-time tracking; the algorithm flow, shown in Fig. 2, is as follows:
The original image containing the artificial marker (as shown in Fig. 3), captured under natural illumination, is processed by threshold-segmentation binarization 1, seed filling 2, shape restriction 3, geometric-center solving 4 and geometric-constraint processing 5 to obtain the image coordinates of the feature points of the basic lattice and the expansion lattice of the artificial marker; the feature points are then converted into the standard image (as shown in Fig. 7) by perspective transformation 6. For the standard image, according to the position distribution of the expansion lattice, the features are mapped into a feature matrix, in which an element value of 1 indicates a dot and 0 indicates no dot, realizing the identification of the artificial marker. Using artificial markers with different patterns realizes multi-target tracking. The details of each step are as follows:
1) Threshold-segmentation binarization
The pixel gray values of the artificial marker adopted by this system differ considerably from those of most of the background, so the precision of the segmentation threshold is not critical to the success of the tracking algorithm, whereas the real-time requirement on the algorithm is high. Therefore, in practical applications a fixed-threshold segmentation algorithm is used, in which the threshold can be obtained by analyzing the image histogram (see standard image-processing references); the segmentation result is shown in Fig. 4.
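As an illustration of this step, the following is a minimal sketch assuming OpenCV and NumPy; the fixed threshold of 120 (the value quoted in the embodiments) and the histogram-valley heuristic are illustrative choices, not limitations of the method.

```python
import cv2
import numpy as np

def binarize_fixed_threshold(gray: np.ndarray, threshold: int = 120) -> np.ndarray:
    """Fixed-threshold binarization: bright marker dots become 255, background 0."""
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return binary

def threshold_from_histogram(gray: np.ndarray, lo: int = 50, hi: int = 200) -> int:
    """Pick a fixed threshold once, offline, from the gray-level histogram:
    the valley between the dark background peak and the bright dot peak."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    return int(lo + np.argmin(hist[lo:hi]))
```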
2) Seed filling
The purpose of seed filling is to analyze and extract the connected white regions in the thresholded image (as shown in Fig. 4). This system uses the standard seed-fill algorithm (see standard image-processing references); after seed filling, the connected regions in the image are separated out, and the result is, for each connected region, a data group containing the image coordinates of its white pixels.
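A minimal sketch of the standard seed-fill step described above, assuming a NumPy binary image in which white pixels are non-zero; the 4-connectivity and breadth-first traversal are ordinary choices for the standard algorithm rather than requirements of the patent.

```python
from collections import deque
import numpy as np

def seed_fill_regions(binary: np.ndarray) -> list:
    """Standard 4-connected seed fill: returns one (n, 2) array of (row, col)
    pixel coordinates for each connected white region of the binary image."""
    rows, cols = binary.shape
    visited = np.zeros((rows, cols), dtype=bool)
    regions = []
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] == 0 or visited[r, c]:
                continue
            queue, pixels = deque([(r, c)]), []
            visited[r, c] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and binary[ny, nx] != 0 and not visited[ny, nx]:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            regions.append(np.array(pixels))
    return regions
```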
3) Shape restriction
After the data groups of the connected regions are obtained, the aspect-ratio parameter and the pixel count of each region are computed by analyzing the region; by imposing conditions on these two quantities, part of the interference regions or non-target regions can be removed, yielding the candidate feature regions, whose aspect ratio and pixel count are close to those of the dots of the tracked target. The result is shown in Fig. 5: the white points in the figure are the geometric centers of the candidate regions (the solution of the geometric center is described in step 4) below), also called candidate feature points, and the background is black.
The concrete principle is as follows:
1. Aspect ratio:
It is defined from the length (W_q) and the width (H_q) of the minimum bounding rectangle of the connected region, as shown in equation (1):
shape ratio = min(W_q, H_q) / max(W_q, H_q)    (1)
where min() takes the minimum and max() takes the maximum.
Since the dots of the adopted artificial marker pattern are circular, when the position and attitude of the tracked marker change in space, the image of each dot approaches an ellipse. The aspect ratio of a true feature region is therefore less than or close to 1, while the aspect ratio of many background interference regions is far smaller than 1. By setting a threshold on the aspect ratio, regions whose ratio is below the given value are eliminated; the empirical range of this threshold is 0.1-0.8.
2. Pixel count
When the size of the dots of the artificial marker pattern and the extent of the tracking area are fixed, the maximum and minimum numbers of pixels contained in the image region of a dot are predictable; they can be obtained by analyzing the number of pixels in the imaged dot when the marker is closest to and farthest from the camera. A range constraint on the pixel count is set, and connected regions whose pixel count falls outside the range are eliminated, which excludes large background interference regions and tiny noise regions.
When these two condition parameters are set, the conditions should be kept relatively loose, to avoid filtering out true feature regions (a sketch of these two conditions is given below).
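A minimal sketch of the two conditions, assuming the (row, column) pixel lists produced by the seed-fill sketch above; the default parameter values 0.2 and 10-60 are the ones quoted in the embodiments and would be tuned per installation.

```python
import numpy as np

def shape_restriction(regions, min_ratio=0.2, pixel_range=(10, 60)):
    """Keep only connected regions whose bounding-box aspect ratio (equation (1))
    and pixel count are consistent with a marker dot; returns the survivors."""
    candidates = []
    for pixels in regions:                        # pixels: (n, 2) array of (row, col)
        n = len(pixels)
        if not (pixel_range[0] <= n <= pixel_range[1]):
            continue                              # condition 2: pixel count
        h = pixels[:, 0].max() - pixels[:, 0].min() + 1
        w = pixels[:, 1].max() - pixels[:, 1].min() + 1
        if min(w, h) / max(w, h) >= min_ratio:    # condition 1: aspect ratio
            candidates.append(pixels)
    return candidates
```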
4) Solving the geometric center
After the candidate feature regions are obtained, the geometric center coordinates of each region must be solved for further processing; the geometric center of each candidate region is then represented by a white point and called a candidate feature point. It is computed as in equation (2):
X_C = (1/n) Σ_{i=1}^{n} x_i,  Y_C = (1/n) Σ_{i=1}^{n} y_i    (2)
where X_C and Y_C are the center coordinates of a candidate feature region; x_i and y_i are the image coordinates of the i-th pixel of the region, i = 1, ..., n; and n is the total number of pixels in the region. The upper-left corner of the image is taken as the origin of the image coordinate system, with the positive X direction pointing to the right and the positive Y direction pointing down.
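Equation (2) in code form, assuming the same (row, column) pixel arrays as above:

```python
import numpy as np

def geometric_center(pixels: np.ndarray) -> tuple:
    """Equation (2): centroid of a candidate feature region.
    `pixels` is an (n, 2) array of (row, col) coordinates; the result (X_C, Y_C)
    uses the image origin at the upper-left corner, X to the right, Y downward."""
    y_c, x_c = pixels.mean(axis=0)
    return float(x_c), float(y_c)
```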
5) Geometric-constraint processing
After the candidate feature points are obtained, the design features of the artificial marker pattern are used to distinguish the dot centers of the marker from the other candidate feature points. The algorithm steps are as follows:
1. Take each candidate feature point in turn as a candidate corner point (the dot at the intersection of the long side and the short side), compute the distance and slope parameters from every other candidate feature point to this point, and execute step 2.
2. Using the slope parameters, group the candidate feature points by the condition that they lie on the same ray whose vertex is the candidate corner point, and record the coordinate arrays of the groups whose number of collinear candidate feature points is p_n ≥ 3; the criterion for "the same ray" is that the angular difference derived from the slope parameters is within 10 degrees; execute step 3. If two or more arrays with p_n ≥ 3 are not obtained, this candidate feature point is not a corner point; process the next candidate corner point (return to step 1) or finish (when all candidate feature points have been traversed).
3. For every group, verify whether the distances from its feature points to the current candidate corner point are close to the integer proportion 1 : 2 : 3 : .... If the number of points satisfying the proportion is p_n ≥ 3, merge them into a point array satisfying the proportion condition, delete the out-of-proportion points from the array, sort the remaining points in ascending order of distance to the current candidate corner point, and record the mean distance between adjacent points; if the number of points satisfying the proportion is p_n < 3, delete the whole array. If after this processing the number of point arrays obtained is not less than 2, execute step 4; otherwise this candidate feature point is not a corner point, so process the next candidate corner point (return to step 1) or finish (when all candidate feature points have been traversed).
4. Among the remaining point arrays, if there are two arrays whose mean adjacent-point distances have a ratio (the smaller divided by the larger) between 0.7 and 1.0, one of which contains 4 points and the other 3 points, and the angle from the ray represented by the former to the ray represented by the latter is verified to be less than 180 degrees, then this candidate corner point is a true corner point: the points of the 4-point array are the dots of the long side and the points of the 3-point array are the dots of the short side, and each side can be ordered by distance. This completes the extraction and ordering of the basic lattice of the dot pattern and yields the image coordinates of the dots; the result is shown in Fig. 6, and a condensed code sketch of steps 1 to 4 is given after this list. If multi-target tracking is required, step 5 must also be executed (otherwise it is optional). If the conditions are not met, process the next candidate corner point (return to step 1) or finish (when all candidate feature points have been traversed).
5. For multi-target tracking, the scene contains markers with several different patterns, so after the extraction and identification of the basic lattice, the expansion lattice must also be extracted and ordered. According to the design of the artificial marker, the points of the expansion lattice always lie around the basic lattice, their distances to the corner point are comparable to the distances from the basic-lattice points to the corner point, and the black background of the plate avoids pattern noise in the neighbourhood; therefore, taking the corner point as the center of a circle whose radius is the maximum distance from the known lattice points to the corner point, the expansion-lattice points can be found among the candidate feature points and their image coordinates obtained. Different artificial markers can then be identified from the number of points of the expansion lattice; when the numbers of points are identical, the markers can be distinguished by the arrangement of the expansion-lattice points (see 6. perspective transformation to the standard image and 7. feature mapping), thereby realizing multi-target tracking.
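The condensed sketch referred to in step 4, assuming candidate feature points given as (x, y) pairs; the grouping and proportion tests are simplified (for instance, the 0.7-1.0 mean-distance ratio and the sub-180-degree orientation check between the two rays are omitted), so it illustrates the structure of the search rather than the full procedure.

```python
import numpy as np

def find_corner_and_sides(points, angle_tol_deg=10.0, rel_tol=0.25):
    """Try each candidate feature point as the corner dot; look for one ray of
    4 dots (long side) and one ray of 3 dots (short side) through it, with
    distances from the corner close to the 1:2:3:... proportion."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 8:                                     # basic lattice has 8 dots
        return None
    for ci in range(len(pts)):
        corner = pts[ci]
        others = np.delete(pts, ci, axis=0)
        vecs = others - corner
        dists = np.linalg.norm(vecs, axis=1)
        angles = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0]))
        rays = []
        used = np.zeros(len(others), dtype=bool)
        for i in range(len(others)):
            if used[i]:
                continue
            diff = np.abs((angles - angles[i] + 180.0) % 360.0 - 180.0)
            idx = np.where(diff <= angle_tol_deg)[0]     # same-ray grouping (step 2)
            used[idx] = True
            if len(idx) < 3:
                continue
            order = idx[np.argsort(dists[idx])]
            d = dists[order]
            expected = np.arange(1, len(d) + 1)
            if np.all(np.abs(d / d[0] - expected) <= rel_tol * expected):  # step 3
                rays.append(order)
        long_side = [r for r in rays if len(r) == 4]
        short_side = [r for r in rays if len(r) == 3]
        if long_side and short_side:                     # step 4 (simplified)
            return corner, others[long_side[0]], others[short_side[0]]
    return None
```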
6) Perspective transformation to the standard image
When the numbers of expansion-lattice points are identical, different artificial markers can be identified from the arrangement of the expansion-lattice points; since the horizontal and vertical center distances of all dots are equal by design, the row and column indices of the expansion-lattice points can be determined from the information of the basic lattice. However, because the tracked target moves, the spatial relationship between the camera and the artificial marker changes, so the image of the marker changes as well, and the program cannot directly judge the relative positions of the basic lattice and the expansion lattice. Therefore, to obtain the arrangement of the expansion-lattice points, the feature-point image 8 containing the basic and expansion lattices is transformed into the standard image 9 by a perspective transformation, which is equivalent to the image captured when the optical axis of the camera is perpendicular to the marker plane; the arrangement of the expansion-lattice points can then be determined. The algorithm is as follows:
1. Solving the perspective transformation parameters
The forward mapping of the perspective transformation can be expressed as equation (3):
ρ [u_i, v_i, 1]^T = A [x_i, y_i, 1]^T,  A = (a_kh)    (3)
where ρ is a non-zero scale factor, [u_i, v_i] and [x_i, y_i] (i = 0, 1, 2, 3) are the image coordinates of corresponding points in the captured image 8 and the standard image 9, and a_kh (k = 1, 2, 3; h = 1, 2, 3) are the perspective projection transformation parameters.
After the basic lattice of the captured image has been extracted and ordered, the center image coordinates [u_i, v_i] of the basic-lattice points in the feature-point image 8 are available, as is the correspondence between the basic-lattice points of the feature-point image 8 and those of the standard image 9; since the arrangement and position distribution of the basic lattice in the standard image are fixed, the center coordinates [x_i, y_i] of the basic-lattice points in the standard image are known. Choosing any four groups of corresponding points, substituting their coordinate values [u_i, v_i] and [x_i, y_i] into equation (3), and setting a_33 = 1, the perspective projection transformation parameters can be solved.
2. Perspective transformation
After the perspective projection transformation parameters have been solved, the center coordinates [u_i, v_i] of the expansion-lattice points in the feature-point image 8 are substituted into equation (3), and the corresponding coordinates [x_i, y_i] of the expansion-lattice centers in the standard image 9 are solved; the result of the transformation is shown in Fig. 7.
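A minimal sketch of steps 1 and 2, assuming OpenCV; the four ordered basic-lattice correspondences play the role of the "four groups of corresponding points" above, and the function name is illustrative.

```python
import cv2
import numpy as np

def map_expansion_to_standard(basic_img_pts, basic_std_pts, expansion_img_pts):
    """Estimate the 3x3 perspective matrix from 4 ordered basic-lattice
    correspondences, then map the expansion-lattice centers from the captured
    feature-point image into the standard image."""
    src = np.float32(basic_img_pts[:4])   # [u_i, v_i] in the captured image
    dst = np.float32(basic_std_pts[:4])   # [x_i, y_i] known by design in the standard image
    H = cv2.getPerspectiveTransform(src, dst)       # captured -> standard homography
    pts = np.float32(expansion_img_pts).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```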
7) Feature mapping
After the corresponding coordinates [x_i, y_i] of the expansion-lattice centers in the standard image 9 have been obtained, they are compared with the center coordinates [x_i, y_i] of the basic-lattice points in the standard image 9 to obtain the row and column indices of each expansion-lattice point; the row-column information of all expansion-lattice points is then mapped into the feature matrix 10 (as shown in Fig. 7). A number of different markers can be identified from the values of the feature matrix 10, realizing multi-target tracking and identification.
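A minimal sketch of the mapping into the feature matrix, assuming the expansion-lattice points are already expressed in standard-image coordinates relative to the corner dot; the matrix size and grid spacing are illustrative assumptions.

```python
import numpy as np

def feature_matrix(expansion_std_pts, grid_spacing, shape=(5, 5)):
    """Map expansion-lattice centers (standard-image coordinates relative to the
    corner dot) onto a binary feature matrix: 1 = dot present, 0 = no dot."""
    mat = np.zeros(shape, dtype=np.uint8)
    for x, y in expansion_std_pts:
        col = int(round(x / grid_spacing))     # column index from the X offset
        row = int(round(y / grid_spacing))     # row index from the Y offset
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            mat[row, col] = 1
    return mat
```

Two markers with the same number of expansion dots but different arrangements then yield different matrices, which is how they are told apart.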
A real-time reconstruction algorithm for the multi-view space vector, specifically:
From the image coordinates and the ordering of the 8 pairs of basic-lattice points obtained from two images, the spatial coordinates of the 8 points can be reconstructed, and the spatial attitude of the marker plane coordinate system in world coordinates is obtained at the same time. The X-axis of the marker plane coordinate system is the ray from the corner point fitted through the other 4 points of the long side, the Y-axis is the ray from the corner point fitted through the 3 points of the short side, the coordinate origin is the corner point, and the Z-axis is determined by the right-hand rule.
The reconstruction of a point is based on the following formulas:
Zc_1 [u_1, v_1, 1]^T = M_1 [x, y, z, 1]^T    (4)
Zc_2 [u_2, v_2, 1]^T = M_2 [x, y, z, 1]^T    (5)
where (x, y, z) are the world coordinates of an arbitrary space point p, Zc_1 is the Z coordinate of point p in the camera coordinate system C_1, u_1 and v_1 are the image coordinates of its image point, and M_1 is the parameter matrix of camera C_1, a 3 x 4 matrix; the remaining variables are the corresponding quantities of camera C_2. With M_1, M_2, (u_1, v_1) and (u_2, v_2) known, solving equations (4) and (5) simultaneously yields the world coordinates of point p.
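A minimal sketch of solving equations (4) and (5) simultaneously by the standard linear least-squares (DLT) approach, assuming NumPy; M1 and M2 are the 3 x 4 camera parameter matrices and uv1, uv2 the matched image coordinates of the same point.

```python
import numpy as np

def triangulate(M1, M2, uv1, uv2):
    """Recover the world coordinates of point p from equations (4) and (5):
    each view contributes two linear equations in (x, y, z) once Zc is eliminated."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.vstack([
        u1 * M1[2] - M1[0],
        v1 * M1[2] - M1[1],
        u2 * M2[2] - M2[0],
        v2 * M2[2] - M2[1],
    ])                                   # 4 x 4 system acting on [x, y, z, 1]^T
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                           # homogeneous least-squares solution
    return X[:3] / X[3]
```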
Because the effective tracking range of two cameras is limited, multi-view tracking and pose reconstruction with a multi-camera array can be adopted. The present invention uses a multi-view stereo vision structure distributed on a planar grid, as shown in Fig. 8: the optical axis of each camera is perpendicular to the horizontal plane, the optical centers are evenly distributed on the grid points, and the shaded area in the figure is the effective tracking range. When this multi-view vision system is set up, H is the mounting height of the camera optical centers and h is the height of the stable tracking area; in general the viewing angles of the camera in its two directions differ, and β > α is assumed. To guarantee that the height h of the stable tracking area is equal in all directions, the grid spacings d_1 and d_2 must satisfy a certain constraint, as given in equation (6).
When the multi-view stereo vision system is actually installed, the camera parameters and orientations cannot be perfectly identical; to guarantee the height h of the stable tracking area, a correction factor Δh is added, and d_1 and d_2 can be computed from the following two formulas:
d_1 = (H - h - Δh) · tan(α/2)    (7)
d_2 = k · d_1    (8)
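Equations (7) and (8) in code form; the parameter names follow the text above and the choice of radians for the view angle is an assumption.

```python
import math

def grid_spacing(H, h, delta_h, alpha, k):
    """H: mounting height of the optical centers; h: height of the stable
    tracking area; delta_h: correction factor; alpha: the smaller camera view
    angle in radians; k: ratio between the two grid directions."""
    d1 = (H - h - delta_h) * math.tan(alpha / 2.0)   # equation (7)
    d2 = k * d1                                      # equation (8)
    return d1, d2
```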
For a system with two nodes (the system consists of two sets of cameras and supporting equipment):
the image coordinates and ordering of the 8 pairs of basic-lattice points in the two images are used directly for reconstruction. For a system with several nodes (the system consists of N ≥ 3 sets of cameras and supporting equipment):
in the system shown in Fig. 8, a marker inside the stable tracking area is seen by at least three cameras; for the sake of reconstruction efficiency, only two images are used for reconstruction. The elimination principle is: the image in which the centroid of the marker point set is farthest from the image center is eliminated, because the closer the points are to the image center, the better the linearity of the imaging. After the two images have been selected, the reconstruction proceeds as described above.
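A minimal sketch of this selection rule, assuming each node reports the image coordinates of the marker points it detected; the data layout is an assumption.

```python
import numpy as np

def select_two_views(marker_points_per_view, image_size):
    """Keep the two views whose marker-point-set centroid lies closest to the
    image center.  marker_points_per_view: {view_id: (n, 2) array of points};
    image_size: (width, height)."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    def offset(view_id):
        mx, my = np.asarray(marker_points_per_view[view_id], dtype=float).mean(axis=0)
        return float(np.hypot(mx - cx, my - cy))
    return sorted(marker_points_per_view, key=offset)[:2]
```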
A parallel processing structure based on microcomputers: to improve cost-effectiveness and flexibility, a parallel processing structure based on N (N ≥ 2) microcomputers is adopted. Using MPI (Message Passing Interface), the N+1 processes shown in Fig. 9 are defined: N processes implement the target tracking algorithm and carry out image acquisition, feature-point extraction and processing, and 1 process carries out the three-dimensional reconstruction.
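A minimal sketch of the N+1 process layout, assuming mpi4py as the MPI binding; acquire_and_track and reconstruct_pose are hypothetical placeholders for the tracking and reconstruction steps described above, not the patented implementation.

```python
# Run with, e.g.:  mpiexec -n 3 python tracker.py   (2 tracking nodes + 1 reconstruction)
from mpi4py import MPI
import numpy as np

def acquire_and_track(node_id):
    """Placeholder: image acquisition plus the multi-target tracking algorithm;
    returns the 2-D feature-point coordinates found by this node."""
    return np.zeros((8, 2))

def reconstruct_pose(per_node_points):
    """Placeholder: multi-view space-vector reconstruction (equations (4)-(5))."""
    return None

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N = size - 1                                  # N tracking processes + 1 reconstruction process

if rank < N:                                  # processes 0 .. N-1: acquisition and tracking
    for _ in range(100):                      # bounded loop for the sketch
        comm.send(acquire_and_track(rank), dest=N, tag=rank)
else:                                         # process N: three-dimensional reconstruction
    for _ in range(100):
        per_node = [comm.recv(source=i, tag=i) for i in range(N)]
        pose = reconstruct_pose(per_node)
        # the pose would be forwarded to the human-computer interaction system here
```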
The effective operating condition of the present invention is: besides the normal operation of all hardware and software, at least two of the captured scene images must contain the artificial marker at the same time; otherwise, the system outputs the last valid result.
Compared with the electromagnetic tracking systems in common use at present, the present invention has the following outstanding features and notable advantages:
It is not susceptible to electromagnetic interference from the surrounding environment, and the tracking result is stable. Ordinary electromagnetic tracking systems are wired devices, whereas the moving parts of the present invention need no transmission cable, making it comfortable to use; ordinary electromagnetic tracking systems have a limited effective range, whereas the present invention can employ multiple camera systems to extend the tracking range. The structure is also flexible: the hardware and scale of the system can be adjusted according to the number of tracked targets, the tracking range and the real-time requirements. The cost of the present invention is low, it is easy to realize, and its reliability, real-time performance and stability are high.
Description of drawings
Fig. 1 is the coplanar dot-pattern design
Fig. 2 is the tracking and identification algorithm flow
Fig. 3 is an original image captured while the system is running
Fig. 4 is the image after threshold-segmentation binarization
Fig. 5 is the image after shape-restriction processing
Fig. 6 is the image after geometric-constraint processing
Fig. 7 is the perspective transformation and feature mapping
Fig. 8 is the multi-view stereo vision structure distributed on a planar grid
Fig. 9 is the relationship among the processes and the inter-process communication
Fig. 10 is a schematic diagram of the overall six-degree-of-freedom visual tracking system based on the parallel processing structure with two nodes
Fig. 11 is the tracking and three-dimensional reconstruction of a single artificial marker
Fig. 12 is the tracking and identification of two artificial markers
Embodiment
A preferred embodiment of the present invention is as follows. Referring to Fig. 10, this six-degree-of-freedom visual tracking method based on the microcomputer parallel processing structure adopts a system that includes two microcomputers (microcomputer 0 and microcomputer 1), i.e. N = 2; two image acquisition subsystems (comprising cameras 0 and 1 and the image capture cards installed in microcomputers 0 and 1); two network communication cards; one network switch; and an artificial marker with the coplanar dot pattern shown in Fig. 1.
Two microcomputers:
The three processes of the system run on the two microcomputers, which are on the same local area network (LAN): processes 0 and 2 run on microcomputer 0, and process 1 runs on microcomputer 1.
Two cover image acquisition subsystems:
The image acquisition subsystems in this embodiment use two cameras and two image capture cards, installed in the two microcomputers respectively; the two cameras capture video of the scene in which the artificial marker is located from different positions, and the captured video signals are converted into digital image data by the image capture cards.
The digital image data 11 and 12 thus obtained are processed by the two real-time tracking algorithm processes 0 and 1 running on microcomputer 0 and microcomputer 1 respectively. After threshold-segmentation binarization (threshold 120), seed filling, shape restriction (aspect-ratio threshold 0.2, pixel-count range 10-60), geometric-center solving and geometric-constraint processing, two groups of two-dimensional image coordinate arrays of the basic lattice of one artificial marker are obtained; the white points in Fig. 11 are the basic lattice. According to the present invention, the two-dimensional image coordinate array obtained on microcomputer 1 is sent to process 2 on microcomputer 0 through the network communication cards and the network switch, and process 2, following the real-time multi-view space-vector reconstruction algorithm of the present invention, uses these two groups of data to continuously reconstruct the spatial attitude 13 of the moving marker; the processing speed reaches 25 frames per second.
The second embodiment of the present invention is basically the same as the embodiment described above; the difference is that it includes two artificial markers. The image data obtained are processed by the two real-time tracking algorithm processes 0 and 1 running on microcomputer 0 and microcomputer 1 respectively. After threshold-segmentation binarization (threshold 120), seed filling, shape restriction (aspect-ratio threshold 0.2, pixel-count range 10-60), geometric-center solving and geometric-constraint processing, four groups of feature-point two-dimensional image coordinate arrays of the basic lattices and expansion lattices of the two artificial markers are obtained. Fig. 12 shows the original image 14 containing the two markers processed by process 0, the extraction result 15 of the basic lattices, and the extraction result 16 of the basic and expansion lattices. The feature-point image 8 obtained after the geometric-constraint processing is converted into the standard image 9 by the perspective transformation; for the standard image 9, the position distribution of the expansion lattice is converted into the feature matrix 10, realizing the identification and distinction of the two artificial markers, with the principle shown in Fig. 7.
The two groups of two-dimensional image coordinate arrays obtained on microcomputer 1 are sent to process 2 on microcomputer 0 through the network communication cards and the network switch, and process 2, following the real-time multi-view space-vector reconstruction algorithm of the present invention, uses these four groups of data to continuously reconstruct the spatial attitudes of the moving markers; the processing speed reaches 21 frames per second.
The third embodiment of the present invention is as follows. This six-degree-of-freedom visual tracking system based on the microcomputer parallel processing structure comprises N microcomputers (microcomputer 0 to microcomputer N-1); N image acquisition subsystems; N network communication cards; one network switch; and M (M ≥ 1) artificial markers with the coplanar dot pattern shown in Fig. 1. The number of equipment sets N is determined by the tracking range, and the number of artificial markers M is determined by the tracking requirements and the hardware processing performance; the maximum number of targets the system can track is the number of artificial markers at which the system processing speed still meets the minimum requirement of the application.
N platform microcomputer:
According to the present invention, the N+1 processes of the system run on the N microcomputers, which are on the same local area network (LAN): processes 0 to N-1 run on microcomputers 0 to N-1 respectively, and process N, which performs the three-dimensional reconstruction, runs on one of these microcomputers.
N overlaps image acquisition subsystem:
The image acquisition subsystems of the system use N cameras and N image capture cards, installed in microcomputers 0 to N-1 respectively; the N cameras capture video of the scene in which the artificial markers are located from different positions, and the captured video signals are converted into digital video data by the image capture cards. The spatial arrangement of the N cameras follows the multi-view stereo vision structure design of the present invention, as shown in Fig. 8.
According to the present invention, the video data obtained are processed by the N real-time tracking algorithm processes 0 to N-1 running on microcomputers 0 to N-1 respectively. After threshold-segmentation binarization, seed filling, shape restriction, geometric-center solving and geometric-constraint processing, 2 × M groups of feature-point two-dimensional image coordinate arrays of the basic lattices and expansion lattices of the artificial markers are obtained; the feature-point image 8 obtained after the geometric-constraint processing is converted into the standard image 9 by the perspective transformation. For the standard image 9, the position distribution of the expansion lattice is converted into the feature matrix, realizing the identification of the artificial markers, with the principle shown in Fig. 7.
According to the present invention, the two-dimensional image coordinate arrays obtained on microcomputers 0 to N-1 are sent to the reconstruction process through the network communication cards and the network switch; this process, following the real-time multi-view space-vector reconstruction algorithm of the present invention, uses the 2 × M groups of data to continuously reconstruct the spatial attitudes of the M moving markers.
Claims (3)
- 1. A six-degree-of-freedom visual tracking method based on a microcomputer parallel processing structure, characterized in that the operation steps are as follows: 1) mounting an artificial marker on the tracked object; 2) acquiring video of the scene containing the artificial marker with N cameras to obtain original images; 3) processing the acquired original images on the computers, the processing steps being: 1. a multi-target real-time tracking algorithm: the captured original image containing the artificial marker is processed by threshold-segmentation binarization, seed filling, shape restriction, geometric-center solving and geometric-constraint processing to obtain the image coordinates of the feature points of the basic lattice and the expansion lattice of the artificial marker; the feature points are then converted into a standard image by perspective transformation, and according to the position distribution of the expansion lattice in the standard image, the features are mapped into a feature matrix, realizing marker identification; using artificial markers with different patterns realizes multi-target tracking; 2. a real-time multi-view space-vector reconstruction algorithm: a multi-view tracking technique and pose reconstruction algorithm with a multi-camera array are adopted, using a multi-view stereo vision structure distributed on a planar grid; the optical axis of each camera is perpendicular to the horizontal plane and the optical centers are evenly distributed on the grid points; the image selection principle of the reconstruction algorithm is: the two images in which the centroid of the marker point set lies closest to the image center are selected; 3. a parallel processing structure based on microcomputers: a parallel processing structure based on N (N ≥ 2) microcomputers is adopted, and N+1 processes are defined using MPI: N processes implement the target tracking algorithm and carry out image acquisition, feature-point extraction and processing, and 1 process carries out the three-dimensional reconstruction (of the spatial position and orientation); 4) after the three-dimensional reconstruction of the spatial position and orientation of the marker is completed, the reconstruction result is output to various human-computer interaction systems through network communication equipment.
- 2. The six-degree-of-freedom visual tracking method based on the microcomputer parallel processing structure according to claim 1, characterized in that the artificial marker has the following features: 1) all pattern points are coplanar, i.e. the dots of the pattern are distributed on one flat plate; 2) the radius of every dot is identical, and the radius is determined by the working range of the application and the achievable recognition resolution; 3) the horizontal and vertical distances between the centers of adjacent dots are equal, and the distance is determined by the working range of the application and the achievable recognition resolution; 4) the centers of two groups of edge points are respectively collinear, and the two lines are orthogonal to each other; one group contains more points than the other; 5) the dots are white and the background is black.
- 3. A system for the six-degree-of-freedom visual tracking method based on the microcomputer parallel processing structure of claim 1, comprising N microcomputers, N image acquisition systems, N network communication cards and one network switch, characterized in that an artificial marker is installed on the tracked object, the cameras of the image acquisition systems are aimed at the scene where the artificial marker is located, the camera output is input to the microcomputers through image capture cards, and the microcomputers are interconnected through the network communication cards and the network switch.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200410017199 CN1674047A (en) | 2004-03-25 | 2004-03-25 | Six freedom visual tracking method and system based on micro machine parallel processing structure |
Publications (1)
Publication Number | Publication Date |
---|---|
CN1674047A true CN1674047A (en) | 2005-09-28 |
Family
ID=35046573
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200410017199 Pending CN1674047A (en) | 2004-03-25 | 2004-03-25 | Six freedom visual tracking method and system based on micro machine parallel processing structure |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1674047A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101576994B (en) * | 2009-06-22 | 2012-01-25 | 中国农业大学 | Method and device for processing remote sensing image |
CN102288106B (en) * | 2010-06-18 | 2013-03-27 | 合肥工业大学 | Large-space visual tracking six-dimensional measurement system and method |
CN102288106A (en) * | 2010-06-18 | 2011-12-21 | 合肥工业大学 | Large-space visual tracking six-dimensional measurement system and method |
CN101893894B (en) * | 2010-06-30 | 2012-01-04 | 上海交通大学 | Reconfigurable miniature mobile robot cluster locating and tracking system |
CN101893894A (en) * | 2010-06-30 | 2010-11-24 | 上海交通大学 | Reconfigurable Micro Mobile Robot Swarm Positioning and Tracking System |
CN102156863B (en) * | 2011-05-16 | 2012-11-14 | 天津大学 | Cross-camera tracking method for multiple moving targets |
CN102156863A (en) * | 2011-05-16 | 2011-08-17 | 天津大学 | Cross-camera tracking method for multiple moving targets |
CN107693131A (en) * | 2016-08-09 | 2018-02-16 | 株式会社高永科技 | Optical tracking mark, optical tracking system and optical tracking method |
CN107693131B (en) * | 2016-08-09 | 2020-06-30 | 株式会社高迎科技 | Optical tracking marker, optical tracking system, and optical tracking method |
CN110959099A (en) * | 2017-06-20 | 2020-04-03 | 卡尔蔡司Smt有限责任公司 | System, method and marker for determining the position of a movable object in space |
CN109739344A (en) * | 2018-11-20 | 2019-05-10 | 平安科技(深圳)有限公司 | Unlocking method, device, equipment and storage medium based on eyeball moving track |
CN109739344B (en) * | 2018-11-20 | 2021-12-14 | 平安科技(深圳)有限公司 | Unlocking method, device and equipment based on eyeball motion track and storage medium |
CN109520706A (en) * | 2018-11-21 | 2019-03-26 | 云南师范大学 | Automobile fuse box assembly detection system, image-recognizing method and screw hole positioning mode |
CN109520706B (en) * | 2018-11-21 | 2020-10-09 | 云南师范大学 | Screw hole coordinate extraction method of automobile fuse box |
CN113894799A (en) * | 2021-12-08 | 2022-01-07 | 北京云迹科技有限公司 | Robot and marker identification method and device for assisting environment positioning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |