CN105335977A - Image pickup system and positioning method of target object - Google Patents

Image pickup system and positioning method of target object

Info

Publication number
CN105335977A
CN105335977A (application CN201510711384.2A)
Authority
CN
China
Prior art keywords
image
target
video camera
subimage
layer
Prior art date
Legal status
Granted
Application number
CN201510711384.2A
Other languages
Chinese (zh)
Other versions
CN105335977B (en)
Inventor
黑光月
袁肇飞
曾庆彬
邹文艺
晋兆龙
陈卫东
Current Assignee
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd filed Critical Suzhou Keda Technology Co Ltd
Priority to CN201510711384.2A priority Critical patent/CN105335977B/en
Publication of CN105335977A publication Critical patent/CN105335977A/en
Application granted granted Critical
Publication of CN105335977B publication Critical patent/CN105335977B/en
Status: Active


Abstract

The invention provides a positioning method for a target object, for use in an image pickup system. The image pickup system includes a first camera for acquiring a first image and a second camera for acquiring a second image. The positioning method includes the following steps: a. the first image and the second image, both containing the target object, are acquired; b. a first target image containing the target object is obtained from the first image; c. an initial homography matrix is calculated from the calibration information of the first camera and the second camera and the state information of the second camera; d. the second image is mapped into the first image according to the initial homography matrix, yielding a second target image; e. optical flow matching is performed between the first target image and the second target image, and the optical flow information is calculated; f. the homography matrix is corrected according to the optical flow information; and g. the first target image is mapped into the second image according to the corrected homography matrix, yielding a corrected second target image, so that the target object is located in the second image.

Description

Camera system and method for locating a target object
Technical field
The present invention relates to the field of computer application technology, and in particular to a camera system and a method for locating a target object.
Background art
A PTZ (pan-tilt-zoom) camera, commonly called a dome camera, integrates a pan-tilt platform with a camera system. The camera system can zoom to change the field of view, while the pan-tilt platform rotates the camera horizontally and vertically. A PTZ camera can therefore track and zoom in on targets in a monitored scene, and it plays an important role in surveillance systems.
In a gun-dome linked system, a wide-angle gun camera performs background modeling on the monitored region to detect moving targets, and then controls the dome camera to track them; this control covers the dome camera's pan (P), tilt (T), zoom, and rotation speed. The wide field of view of the gun camera is thus first used for target detection over the large scene, and the dome camera's P, T, zoom, and rotation speed are then used to track and zoom in on the target, so that the system monitors the whole scene without losing the details of small targets.
The dome camera's image is widescreen, and the target object often does not match its shape; a tall, thin pedestrian is one example. If the entire dome camera image is captured and passed to an attribute-analysis module, a large amount of irrelevant information around the target is stored, and the analysis result suffers. Simply keeping the central part of the dome image is not reasonable either: in a practical gun-dome linked system the dome camera moves under the gun camera's control to track the target while the target itself keeps moving, so the target may appear anywhere in the dome camera image.
Summary of the invention
To overcome the above defects of the prior art, the present invention provides a camera system and a target-object localization method that can locate a target object quickly and effectively and obtain a target image suitable for image processing.
The invention provides a method for locating a target object in a camera system, the camera system comprising: a first camera for acquiring a first image, the first image being a wide-angle image of a scene; and a second camera for acquiring a second image, the second image being a partial enlarged view of the scene. The localization method comprises: a. acquiring the first image and the second image, both containing the target object; b. obtaining a first target image from the first image, the first target image containing the target object; c. computing an initial homography matrix from the calibration information of the first camera and the second camera and the state information of the second camera; d. mapping the second image into the first image according to the initial homography matrix to obtain a second target image; e. performing optical flow matching between the first target image and the second target image and computing the optical flow information; f. computing a corrected homography matrix from the optical flow information; g. mapping the first target image into the second image according to the corrected homography matrix to obtain a corrected second target image, thereby locating the target object in the second image.
Preferably, step b comprises: in the first image, cropping a rectangular target image centered on the center of the target object as the first target image.
Preferably, the cropped rectangular target image is a square target image of 96*96 pixels.
Preferably, the initial homography matrix is the homography matrix that transforms the second image into the first image.
Preferably, step c comprises: c1. obtaining the calibration information, the calibration information including a second homography matrix that transforms pixel coordinates of the first camera's first image into physical coordinates of the second camera; c2. computing, from pixel coordinates of the second camera's second image and the state information of the second camera, the physical coordinates of the second camera corresponding to those pixel coordinates; c3. computing, from the inverse of the second homography matrix and the physical coordinates of the second camera, the pixel coordinates in the first camera's first image corresponding to the pixel coordinates of the second camera's second image; and c4. computing the initial homography matrix from the corresponding pixel coordinates of the first camera's first image and the second camera's second image.
Preferably, step c2 comprises: selecting from the second image the pixel coordinates of at least four non-collinear pixels as the pixel coordinates of the second camera.
Preferably, step e comprises: e1. computing Gaussian pyramids of the first target image and the second target image; e2. computing, layer by layer, the gradient information of the Gaussian pyramid of the second target image; e3. performing optical flow matching between the first target image and the second target image layer by layer according to the gradient information, and computing the optical flow information.
Preferably, step e1 comprises: convolving the first target image and the second target image with a Gaussian kernel; building from the first target image and the second target image Gaussian pyramids of height 3, denoted first target image set A and second target image set B respectively, wherein the first target image set A comprises, in order of decreasing size, a first-layer first target sub-image A1, a second-layer first target sub-image A2 and a third-layer first target sub-image A3, and the second target image set B comprises, in order of decreasing size, a first-layer second target sub-image B1, a second-layer second target sub-image B2 and a third-layer second target sub-image B3.
Preferably, the Gaussian kernel is [1/16 1/4 3/8 1/4 1/16] × [1/16 1/4 3/8 1/4 1/16]^T.
Preferably, the first-layer sub-images A1 and B1 are 96*96-pixel images; the second-layer sub-images A2 and B2 are 48*48-pixel images; and the third-layer sub-images A3 and B3 are 24*24-pixel images.
Preferably, step e2 comprises computing the gradient information of the second target image set B layer by layer according to the following formula:

$$\nabla B_i = \begin{bmatrix} \mathrm{grad}_x & \mathrm{grad}_y \end{bmatrix}^T = \begin{bmatrix} \dfrac{\partial B_i}{\partial x} & \dfrac{\partial B_i}{\partial y} \end{bmatrix}^T,$$

where $\nabla B_i$ denotes the gradient information of the i-th layer second target sub-image B_i, grad_x its gradient in the X direction and grad_y its gradient in the Y direction, i taking the values 3, 2, 1 in turn.
Preferably, step e3 comprises computing the optical flow information according to the following formula:

$$\begin{bmatrix} d_x \\ d_y \end{bmatrix} = \begin{bmatrix} \sum_N g_{xx} & \sum_N g_{xy} \\ \sum_N g_{xy} & \sum_N g_{yy} \end{bmatrix}^{-1} \begin{bmatrix} err_x \\ err_y \end{bmatrix},$$

where d_x is the offset in the X direction between the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i, d_y is the offset in the Y direction, and [d_x d_y]^T is the optical flow information of the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i. The quantities Σ_N g_xx, Σ_N g_yy, Σ_N g_xy, err_x and err_y are computed respectively as:

$$\sum_N g_{xx} = \sum_N \mathrm{grad}_x\,\mathrm{grad}_x; \quad \sum_N g_{yy} = \sum_N \mathrm{grad}_y\,\mathrm{grad}_y; \quad \sum_N g_{xy} = \sum_N \mathrm{grad}_x\,\mathrm{grad}_y;$$

$$err_x = \sum_N \mathrm{Diff}\cdot\mathrm{grad}_x; \quad err_y = \sum_N \mathrm{Diff}\cdot\mathrm{grad}_y,$$

where N denotes the neighborhood of a feature point P chosen in each layer of the first target image set A, and Diff denotes the gray-level difference of the pixels within the neighborhood N.
Preferably, the neighborhood N is a square region centered on the feature point P whose side length is an odd number of pixels.
Preferably, the optical flow information of the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i is computed from the optical flow information of the (i+1)-th layer first target sub-image A_{i+1} and the (i+1)-th layer second target sub-image B_{i+1}.
Preferably, step f comprises: selecting N pixels in the first target image as first pixels; selecting N pixels in the second target image, each corresponding to one of the first pixels, as second pixels; correcting the pixel coordinates of the N first pixels with the optical flow information to obtain N corrected first pixels; and computing the corrected homography matrix from the pixel coordinates of the corrected first pixels and the second pixels.
Preferably, the first target image has a first target frame, the first target frame being the bounding rectangle of the first target image, and step g comprises: mapping the first target frame into the second image according to the corrected homography matrix, taking the bounding rectangle of the mapped target frame as a second target frame, and taking the image within the second target frame as the corrected second target image.
Preferably, the corrected second target image is used for image recognition and image analysis of the target object.
According to another aspect of the invention, a camera system is also provided, comprising: a first camera for acquiring a first image, the first image being a wide-angle image of a scene; a second camera for acquiring a second image, the second image being a partial enlarged view of the scene; and a locating device that uses the above localization method to control the second camera according to the first image so as to locate the target object in the second image.
Preferably, the first camera is a gun camera and the second camera is a dome camera.
Compared with the prior art, the present invention acquires a wide-angle image and a partial enlarged image with two cameras, computes the offset between the images by mapping and optical flow matching, and then maps the target image containing the target object from the wide-angle image into the partial enlarged image, thereby locating the target object in the enlarged image. Only the target image containing the target object in the enlarged image is used for image processing of the target object. The target image produced by the target-object localization method of the invention contains exactly the target object and does not carry a large amount of invalid information that would increase the time and load of image processing.
Brief description of the drawings
The above and other features and advantages of the present invention will become more apparent from the detailed description of its example embodiments with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of the camera system according to an embodiment of the present invention.
Fig. 2 shows a flow chart of the target-object localization method according to an embodiment of the present invention.
Fig. 3 shows the first image according to an embodiment of the present invention.
Fig. 4 shows the second image according to an embodiment of the present invention.
Fig. 5 shows the first target image according to an embodiment of the present invention.
Fig. 6 shows the second target image according to an embodiment of the present invention.
Fig. 7 shows the corrected second target image according to an embodiment of the present invention.
Detailed description of the embodiments
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments can, however, be implemented in many forms and should not be understood as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the concept of the example embodiments to those skilled in the art. Identical reference numerals in the drawings denote identical or similar structures, so their repeated description is omitted.
The camera system and the target-object localization method provided by the invention are described below with reference to Figs. 1 to 7.
The camera system 100 is preferably a dual-camera linked system comprising a first camera 110, a second camera 120 and a locating device 130. The first camera 110 acquires a first image 200, a wide-angle image of a scene; preferably the first camera 110 is a gun camera for taking wide-angle images. The second camera 120 acquires a second image 300, a partial enlarged view of the scene of the first image 200; preferably the second camera 120 is a dome camera. The locating device 130 uses the localization method provided by the invention to control the second camera 120 according to the first image 200 so as to locate the target object 900 in the second image 300. In one embodiment the locating device 130 is integrated with the first camera 110. In another embodiment it is integrated with the second camera 120. In yet another embodiment it is an independent device connected to the first camera 110 and the second camera 120 by wire or wirelessly. In other embodiments the locating device 130 may be distributed across the first camera 110 and the second camera 120 so that different steps are performed on each camera.
The flow chart of the target-object localization method provided by the invention is shown in Fig. 2. The method comprises the following steps:
S210: acquire the first image 200 and the second image 300, both containing the target object. This step is performed by the first camera 110 and the second camera 120 shooting the same scene containing the target object 900. The second image 300 is a partial enlarged view of the first image 200.
S220: obtain a first target image 210 from the first image 200, the first target image 210 containing the target object 900.
Specifically, the first target image 210 is a rectangular image cropped from the first image 200 and centered on the target's center. The rectangle is preferably a square; for example, the first target image 210 can be a square image of 96*96 pixels.
In a specific embodiment the first target image 210 has a first target frame 220, and the first target image 210 lies within the first target frame 220. In this embodiment the locating device 130 also records the position and size of the first target frame 220. The position can be the pixel coordinates of the frame's center point or of each of its vertices; the size can be given in pixels.
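As a concrete illustration, a minimal Python/NumPy sketch of this cropping step follows; the detected target center (cx, cy) and the helper name first_target_image are assumptions for illustration, and clamping at the image border is omitted:

```python
def first_target_image(img1, cx, cy, size=96):
    half = size // 2
    crop = img1[cy - half:cy + half, cx - half:cx + half]  # 96*96 square around the target
    frame = (cx - half, cy - half, size, size)             # first target frame as (x, y, w, h)
    return crop, frame
```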
S230: compute an initial homography matrix from the calibration information of the first camera 110 and the second camera 120 and the state information of the second camera 120.
The initial homography matrix is the homography that transforms the second image 300 into the first image 200. Specifically, it is computed as follows.
First, the calibration information of the first camera 110 and the second camera 120 is obtained. This calibration information includes a second homography matrix that transforms pixel coordinates of the first image 200 of the first camera 110 into physical coordinates of the second camera 120. The second homography matrix can be obtained by existing means, for example by the calibration method of patent application CN103198487A, "Automatic calibration method for video monitoring system". The second homography matrix is preferably a 3*3 matrix, by which the pixel coordinates of any point in the first image 200 of the first camera 110 can be converted into physical coordinates of the second camera 120.
Then, the physical coordinates (horizontal and vertical deflection) of the second camera 120 corresponding to pixel coordinates of its second image 300 are computed from those pixel coordinates and the state information of the second camera 120. Specifically, the pixel coordinates of N points are selected in the second image 300; at least four non-collinear pixels must be chosen on the second image 300.
Then, the pixel coordinates in the first image 200 of the first camera 110 that correspond to the pixel coordinates of the second image 300 of the second camera 120 are computed from the inverse of the second homography matrix and the physical coordinates of the second camera 120. Finally, the initial homography matrix is computed from the corresponding pixel coordinates of the first image 200 and the second image 300.
Specifically, in one embodiment, to improve accuracy and efficiency, five pixels are selected in the second image 300: the central pixel and one pixel near each of the four corners. These five pixels are denoted p_i (i = 1…5), with pixel coordinates (X_di, Y_di). From the internal parameters of the second camera 120, its current physical coordinates (P_c, T_c) (P_c the horizontal deflection coordinate, T_c the vertical deflection coordinate) and its current focal length, the physical coordinates of the second camera 120 corresponding to these five pixels in its current second image 300 are computed. Next, using the inverse of the second homography matrix (the matrix from the pixel coordinate system of the first image 200 to the physical coordinate system of the second camera 120), the pixel coordinates (X_bi, Y_bi) of the five corresponding points in the first image 200 of the first camera 110 are computed. Finally, from the pixel coordinates (X_di, Y_di) of the five points in the second image 300 and the pixel coordinates (X_bi, Y_bi) of the corresponding five points in the first image 200, the initial homography matrix transforming the second image 300 into the first image 200 is computed.
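A Python/OpenCV sketch of this computation follows, under loudly-stated assumptions: pan_tilt_of_pixel stands in for the device-specific mapping from a dome-image pixel to the dome's physical (pan, tilt) coordinates given its current P, T and focal length (the patent does not spell this mapping out), and H2 is the 3*3 second homography from the offline calibration; neither name comes from the patent.

```python
import cv2
import numpy as np

def initial_homography(pts_d, pan_tilt_of_pixel, H2):
    # pts_d: (5, 2) pixel coords in the second image (center + four near-corner points).
    phys = np.array([pan_tilt_of_pixel(p) for p in pts_d], dtype=np.float64)
    # Dome physical coordinates -> first-image pixels via the inverse calibration matrix.
    phys_h = np.hstack([phys, np.ones((len(phys), 1))])      # homogeneous coordinates
    gun_h = (np.linalg.inv(H2) @ phys_h.T).T
    pts_b = gun_h[:, :2] / gun_h[:, 2:3]                     # (X_bi, Y_bi)
    # Initial homography mapping the second image into the first image.
    H_init, _ = cv2.findHomography(pts_d.astype(np.float32),
                                   pts_b.astype(np.float32))
    return H_init
```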
S240: map the second image 300 into the first image 200 according to the initial homography matrix to obtain a second target image 310.
Specifically, the second image 300 is mapped onto the scale of the first image 200 according to the initial homography matrix, as shown in Fig. 6. Where the mapped second image 300 has no raw data, the result is filled with gray value 0. Image interpolation can be used when mapping the second image 300; for a better interpolated result the present invention prefers cubic interpolation.
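A minimal sketch of this mapping with OpenCV, assuming H_init from step S230 and that the warp is rendered at the first image's size (the helper name is illustrative):

```python
import cv2

def map_second_into_first(img2, H_init, first_shape):
    h, w = first_shape[:2]                 # size of the first image 200
    return cv2.warpPerspective(img2, H_init, (w, h),
                               flags=cv2.INTER_CUBIC,        # preferred cubic interpolation
                               borderMode=cv2.BORDER_CONSTANT,
                               borderValue=0)                # fill missing data with gray value 0
```

On one reading of the text, the second target image 310 is then taken from this warped image at the location of the first target image 210, giving the matching 96*96 region.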
S250: perform optical flow matching between the first target image and the second target image, and compute the optical flow information.
Specifically, to handle optical flow matching with large displacements, Gaussian pyramids are used to process the first target image 210 and the second target image 310. A Gaussian pyramid is a set of images, each derived from the same original image through Gaussian down-sampling. Preferably, in the present embodiment, the first target image 210 and the second target image 310 are convolved with the Gaussian kernel [1/16 1/4 3/8 1/4 1/16] × [1/16 1/4 3/8 1/4 1/16]^T, where T denotes matrix transposition. From the first target image 210 and the second target image 310, Gaussian pyramids of height 3 are built, denoted first target image set A and second target image set B respectively. The first target image set A comprises, in order of decreasing size, the first-layer first target sub-image A1, the second-layer first target sub-image A2 and the third-layer first target sub-image A3. The second target image set B comprises, in order of decreasing size, the first-layer second target sub-image B1, the second-layer second target sub-image B2 and the third-layer second target sub-image B3. In a specific embodiment, A1 and B1 are 96*96-pixel images, A2 and B2 are 48*48-pixel images, and A3 and B3 are 24*24-pixel images.
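As an illustration, a minimal Python/OpenCV sketch of this pyramid construction follows; the function name gaussian_pyramid is an assumption, and the explicit filter-then-subsample form is one plausible reading of the text (cv2.pyrDown applies a similar built-in 5-tap kernel):

```python
import cv2
import numpy as np

K = np.array([1/16, 1/4, 3/8, 1/4, 1/16], dtype=np.float32)  # the separable kernel above

def gaussian_pyramid(img, levels=3):
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        blurred = cv2.sepFilter2D(pyr[-1], -1, K, K)  # convolve with K x K^T
        pyr.append(blurred[::2, ::2])                 # subsample by 2: 96 -> 48 -> 24
    return pyr                                        # [A1, A2, A3] or [B1, B2, B3]
```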
After the Gaussian pyramids of the first target image 210 and the second target image 310 have been built, the gradient information of the Gaussian pyramid of the second target image is computed layer by layer. Specifically, the gradient information of the second target image set B is computed layer by layer according to the following formula:
$$\nabla B_i = \begin{bmatrix} \mathrm{grad}_x & \mathrm{grad}_y \end{bmatrix}^T = \begin{bmatrix} \dfrac{\partial B_i}{\partial x} & \dfrac{\partial B_i}{\partial y} \end{bmatrix}^T,$$
where $\nabla B_i$ denotes the gradient information of the i-th layer second target sub-image B_i, grad_x its gradient in the X direction and grad_y its gradient in the Y direction; i takes the values 3, 2, 1 in turn, and T denotes matrix transposition. In some embodiments the gradients grad_x and grad_y can be computed by central differences, the Scharr operator or similar methods; the present invention preferably uses the Scharr operator.
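A short sketch of this per-layer gradient computation with the Scharr operator, in the same Python/OpenCV setting (the helper name layer_gradients is an assumption):

```python
import cv2

def layer_gradients(Bi):
    grad_x = cv2.Scharr(Bi, cv2.CV_32F, 1, 0)   # dBi/dx
    grad_y = cv2.Scharr(Bi, cv2.CV_32F, 0, 1)   # dBi/dy
    return grad_x, grad_y
```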
Then, optical flow matching is performed layer by layer between the first target sub-images and the second target sub-images according to the gradient information, and the optical flow information is computed.
The principle of optical flow matching is expressed by the formula

$$Z\,d = err,$$

where Z denotes the gradient matrix, d the offset and err the difference, given respectively by

$$Z = \begin{bmatrix} \sum_N g_{xx} & \sum_N g_{xy} \\ \sum_N g_{xy} & \sum_N g_{yy} \end{bmatrix}; \quad d = \begin{bmatrix} d_x \\ d_y \end{bmatrix}; \quad err = \begin{bmatrix} err_x \\ err_y \end{bmatrix},$$
where d_x is the offset in the X direction between the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i, d_y is the offset in the Y direction, and [d_x d_y]^T is the optical flow information of the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i.
The quantities Σ_N g_xx, Σ_N g_yy, Σ_N g_xy, err_x and err_y are computed respectively as:

$$\sum_N g_{xx} = \sum_N \mathrm{grad}_x\,\mathrm{grad}_x; \quad \sum_N g_{yy} = \sum_N \mathrm{grad}_y\,\mathrm{grad}_y; \quad \sum_N g_{xy} = \sum_N \mathrm{grad}_x\,\mathrm{grad}_y;$$

$$err_x = \sum_N \mathrm{Diff}\cdot\mathrm{grad}_x; \quad err_y = \sum_N \mathrm{Diff}\cdot\mathrm{grad}_y,$$

where N denotes the neighborhood of a feature point P chosen in each layer of the first target image set A, and Diff denotes the gray-level difference of the pixels within the neighborhood N. The neighborhood N is a square region centered on the feature point P whose side length is an odd number of pixels; preferably, N is a square region of 15*15 pixels.
Substituting the gradient matrix Z, the offset d and the difference err into Z d = err gives:

$$\begin{bmatrix} \sum_N g_{xx} & \sum_N g_{xy} \\ \sum_N g_{xy} & \sum_N g_{yy} \end{bmatrix}\begin{bmatrix} d_x \\ d_y \end{bmatrix} = \begin{bmatrix} err_x \\ err_y \end{bmatrix}.$$

Correspondingly, the optical flow information is:

$$\begin{bmatrix} d_x \\ d_y \end{bmatrix} = \begin{bmatrix} \sum_N g_{xx} & \sum_N g_{xy} \\ \sum_N g_{xy} & \sum_N g_{yy} \end{bmatrix}^{-1}\begin{bmatrix} err_x \\ err_y \end{bmatrix}.$$

Simplifying further:

$$\begin{bmatrix} d_x \\ d_y \end{bmatrix} = \frac{1}{\det(Z)}\begin{bmatrix} \sum_N g_{yy} & -\sum_N g_{xy} \\ -\sum_N g_{xy} & \sum_N g_{xx} \end{bmatrix}\begin{bmatrix} err_x \\ err_y \end{bmatrix},$$

where det(Z) denotes the determinant of the gradient matrix Z. Correspondingly, the optical flow information in the X direction and in the Y direction is computed as:

$$d_x = \Big(\sum_N g_{yy}\, err_x - \sum_N g_{xy}\, err_y\Big) / \det(Z),$$

$$d_y = \Big(\sum_N g_{xx}\, err_y - \sum_N g_{xy}\, err_x\Big) / \det(Z).$$
Specifically, the Newton-Raphson iteration can be used in this step to compute the exact solution for the feature point P, yielding the optical flow information of the center point P_c of the first target image 210, denoted [d_x, d_y]. The formulas above describe the optical flow computation within a single layer. In the pyramid, computing the optical flow of one layer requires the optical flow result of the layer above as the initial flow estimate of the layer below; the initial estimate of the topmost layer is 0. The optical flow of the topmost pyramid layer is therefore computed first, and the output of each layer serves as the input of the next layer down. The recursion between two adjacent layers is described as follows. Suppose the adjacent layers are L and L+1 and the optical flow d^{L+1} of layer L+1 has been computed; then the initial flow estimate g^L of layer L is computed from layer L+1 as:
$$g^L = 2\,(g^{L+1} + d^{L+1}),$$
where it is assumed that the algorithm has no reliable initial flow estimate at the topmost layer L_m, that is:
$$g^{L_m} = \begin{bmatrix} 0 & 0 \end{bmatrix}^T.$$
According to the above formula, when the flow vector of layer L is computed, the search for a match does not start at the feature point's pixel coordinates in that layer of the target image but at those coordinates translated by g^L, and the position of minimum residual is computed; the flow vector found at each layer is therefore a small displacement.
In the same way the displacement vector of layer L-1 is computed, and this process runs until the bottom layer L = 1, i.e. the original image, is reached, at which point the image and the displacement vector are both at the original resolution. The final optical flow displacement vector of the bottom layer is then:
$$d = g^1 + d^1.$$
It can also be expressed in terms of the flow vectors of all layers:
$$d = \sum_{L=1}^{L_m} 2^{\,L-1}\, d^L.$$
In this way it is ensured that, in the optical flow computation of every layer of the Gaussian pyramid, the displacement of the feature point P is a small displacement.
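The following sketch puts the per-layer solve and the coarse-to-fine recursion together, under simplifying assumptions not stated in the patent: a single Newton-Raphson step per layer (the text iterates to an exact solution), an integer-rounded shift of the matching window instead of sub-pixel interpolation, and no boundary handling. pyrA and pyrB are the pyramids from the sketches above, grads[L] the (grad_x, grad_y) pair of layer L, and p the feature point as (y, x); all names are illustrative.

```python
import numpy as np

def layer_flow(A, B, grad_x, grad_y, p, g, half=7):
    # half=7 gives the preferred 15*15 neighborhood N around feature point p.
    y, x = p
    ys, xs = slice(y - half, y + half + 1), slice(x - half, x + half + 1)
    gx, gy = grad_x[ys, xs], grad_y[ys, xs]
    Z = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])         # gradient matrix Z
    dx_i, dy_i = int(round(g[0])), int(round(g[1]))            # integer shift (sub-pixel omitted)
    diff = A[ys, xs] - B[y + dy_i - half:y + dy_i + half + 1,
                         x + dx_i - half:x + dx_i + half + 1]  # Diff within N
    err = np.array([np.sum(diff * gx), np.sum(diff * gy)])
    return np.linalg.solve(Z, err)                             # d = Z^{-1} err

def pyramid_flow(pyrA, pyrB, grads, p):
    Lm = len(pyrA) - 1
    g = np.zeros(2)                                            # g at the top layer is [0 0]^T
    for L in range(Lm, -1, -1):                                # layers 3, 2, 1
        pL = (p[0] >> L, p[1] >> L)                            # feature point scaled to layer L
        d = layer_flow(pyrA[L], pyrB[L], grads[L][0], grads[L][1], pL, g)
        if L > 0:
            g = 2 * (g + d)                                    # g_L = 2 (g_{L+1} + d_{L+1})
        else:
            return g + d                                       # bottom layer: d = g_1 + d_1
```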
S260: compute a corrected homography matrix from the optical flow information. The corrected homography matrix is used to map the first target image 210 into the second image 300.
Specifically, this step computes the corrected homography matrix as follows. First, N pixels are selected in the first target image 210 as first pixels, and N corresponding pixels are selected in the second target image 310 as second pixels. The pixel coordinates of the N first pixels are corrected with the optical flow information, giving N corrected first pixels. The corrected homography matrix is then computed from the pixel coordinates of the corrected first pixels and the second pixels.
In a specific embodiment, N is preferably 5. This step then selects five pixels in the first target image 210 as first pixels p_bi and five corresponding pixels in the second target image 310 as second pixels p_di. The optical flow information [d_x, d_y] computed in step S250 is subtracted from the pixel coordinates of the five first pixels, giving five corrected first pixels p_bi'. The corrected homography matrix is computed from the pixel coordinates of the corrected first pixels p_bi' and the second pixels p_di.
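A sketch of this correction in the same setting, assuming pts_b holds the five first pixels, pts_d the five second pixels, and flow the [d_x, d_y] from step S250 (all names are illustrative); cv2.findHomography maps its first argument onto its second, matching the direction stated above:

```python
import cv2
import numpy as np

def corrected_homography(pts_b, pts_d, flow):
    pts_b_corr = pts_b - flow                       # corrected first pixels p_bi'
    H_corr, _ = cv2.findHomography(pts_b_corr.astype(np.float32),
                                   pts_d.astype(np.float32))
    return H_corr                                   # maps first target image -> second image
```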
S270: map the first target image 210 into the second image 300 according to the corrected homography matrix to obtain a corrected second target image, thereby locating the target object 900 in the second image 300.
Specifically, the first target image 210 has a first target frame 220, the bounding rectangle of the first target image 210. This step further comprises: mapping the first target frame 220 into the second image 300 according to the corrected homography matrix and taking the bounding rectangle of the mapped target frame as the second target frame 320; the image within the second target frame 320 is the corrected second target image. In some embodiments the mapped target frame is not a rectangle, so its bounding rectangle is preferably taken as the second target frame 320. The corrected second target image within the second target frame 320 can then be used for the subsequent image recognition and image analysis of the target object 900.
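Finally, a sketch of this frame mapping; the helper name and the (x, y, w, h) frame convention are assumptions:

```python
import cv2
import numpy as np

def second_target_frame(frame_b, H_corr):
    x, y, w, h = frame_b                                # first target frame 220
    corners = np.float32([[x, y], [x + w, y],
                          [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(corners, H_corr)  # mapped target frame (a quadrilateral)
    return cv2.boundingRect(mapped)                     # second target frame 320 as (x, y, w, h)
```

The corrected second target image is then the crop of the second image 300 at the returned rectangle.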
By applying Gaussian pyramids and optical flow matching to the first image and the second image, the present invention accurately locates the position and size of the target object 900 in the second image 300 and reduces the invalid information in the finally obtained corrected second target image.
Compared with the prior art, the present invention acquires a wide-angle image and a partial enlarged image with two cameras, computes the offset between the images by mapping and optical flow matching, and then maps the target image containing the target object from the wide-angle image into the partial enlarged image, thereby locating the target object in the enlarged image. Only the target image containing the target object in the enlarged image is used for image processing of the target object. The target image produced by the target-object localization method of the invention contains exactly the target object and does not carry a large amount of invalid information that would increase the time and load of image processing.
The illustrative embodiments of the present invention have been shown and described in detail above. It should be understood that the invention is not limited to the disclosed embodiments; rather, the invention is intended to cover the various modifications and equivalent substitutions falling within the scope of the appended claims.

Claims (19)

1. A method for locating a target object, for a camera system, the camera system comprising:
a first camera for acquiring a first image, the first image being a wide-angle image of a scene; and
a second camera for acquiring a second image, the second image being a partial enlarged view of the scene;
the localization method comprising:
a. acquiring the first image and the second image, both containing the target object;
b. obtaining a first target image from the first image, the first target image containing the target object;
c. computing an initial homography matrix from calibration information of the first camera and the second camera and state information of the second camera;
d. mapping the second image into the first image according to the initial homography matrix to obtain a second target image;
e. performing optical flow matching between the first target image and the second target image and computing optical flow information;
f. computing a corrected homography matrix from the optical flow information;
g. mapping the first target image into the second image according to the corrected homography matrix to obtain a corrected second target image, thereby locating the target object in the second image.
2. The localization method of claim 1, wherein step b comprises:
in the first image, cropping a rectangular target image centered on the center of the target object as the first target image.
3. The localization method of claim 2, wherein the cropped rectangular target image is a square target image of 96*96 pixels.
4. The localization method of claim 1, wherein the initial homography matrix is the homography matrix that transforms the second image into the first image.
5. The localization method of claim 4, wherein step c comprises:
c1. obtaining the calibration information, the calibration information including a second homography matrix that transforms pixel coordinates of the first camera's first image into physical coordinates of the second camera;
c2. computing, from pixel coordinates of the second camera's second image and the state information of the second camera, the physical coordinates of the second camera corresponding to those pixel coordinates;
c3. computing, from the inverse of the second homography matrix and the physical coordinates of the second camera, the pixel coordinates in the first camera's first image corresponding to the pixel coordinates of the second camera's second image; and
c4. computing the initial homography matrix from the corresponding pixel coordinates of the first camera's first image and the second camera's second image.
6. The localization method of claim 5, wherein step c2 comprises:
selecting from the second image the pixel coordinates of at least four non-collinear pixels as the pixel coordinates of the second camera.
7. The localization method of claim 1, wherein step e comprises:
e1. computing Gaussian pyramids of the first target image and the second target image;
e2. computing, layer by layer, the gradient information of the Gaussian pyramid of the second target image;
e3. performing optical flow matching between the first target image and the second target image layer by layer according to the gradient information, and computing the optical flow information.
8. The localization method of claim 7, wherein step e1 comprises:
convolving the first target image and the second target image with a Gaussian kernel;
building from the first target image and the second target image Gaussian pyramids of height 3, denoted first target image set A and second target image set B respectively, wherein
the first target image set A comprises, in order of decreasing size, a first-layer first target sub-image A1, a second-layer first target sub-image A2 and a third-layer first target sub-image A3; and
the second target image set B comprises, in order of decreasing size, a first-layer second target sub-image B1, a second-layer second target sub-image B2 and a third-layer second target sub-image B3.
9. The localization method of claim 8, wherein the Gaussian kernel is [1/16 1/4 3/8 1/4 1/16] × [1/16 1/4 3/8 1/4 1/16]^T.
10. The localization method of claim 8, wherein:
the first-layer sub-images A1 and B1 are 96*96-pixel images;
the second-layer sub-images A2 and B2 are 48*48-pixel images; and
the third-layer sub-images A3 and B3 are 24*24-pixel images.
11. The localization method of claim 8, wherein step e2 comprises computing the gradient information of the second target image set B layer by layer according to the following formula:

$$\nabla B_i = \begin{bmatrix} \mathrm{grad}_x & \mathrm{grad}_y \end{bmatrix}^T = \begin{bmatrix} \dfrac{\partial B_i}{\partial x} & \dfrac{\partial B_i}{\partial y} \end{bmatrix}^T,$$

where $\nabla B_i$ denotes the gradient information of the i-th layer second target sub-image B_i, grad_x its gradient in the X direction and grad_y its gradient in the Y direction, i taking the values 3, 2, 1 in turn.
12. The localization method of claim 11, wherein step e3 comprises computing the optical flow information according to the following formula:

$$\begin{bmatrix} d_x \\ d_y \end{bmatrix} = \begin{bmatrix} \sum_N g_{xx} & \sum_N g_{xy} \\ \sum_N g_{xy} & \sum_N g_{yy} \end{bmatrix}^{-1} \begin{bmatrix} err_x \\ err_y \end{bmatrix},$$

where d_x is the offset in the X direction between the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i, d_y is the offset in the Y direction, and [d_x d_y]^T is the optical flow information of the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i;
Σ_N g_xx, Σ_N g_yy, Σ_N g_xy, err_x and err_y are computed respectively as:

$$\sum_N g_{xx} = \sum_N \mathrm{grad}_x\,\mathrm{grad}_x; \quad \sum_N g_{yy} = \sum_N \mathrm{grad}_y\,\mathrm{grad}_y; \quad \sum_N g_{xy} = \sum_N \mathrm{grad}_x\,\mathrm{grad}_y;$$

$$err_x = \sum_N \mathrm{Diff}\cdot\mathrm{grad}_x; \quad err_y = \sum_N \mathrm{Diff}\cdot\mathrm{grad}_y,$$

where N denotes the neighborhood of a feature point P chosen in each layer of the first target image set A, and Diff denotes the gray-level difference of the pixels within the neighborhood N.
13. The localization method of claim 12, wherein the neighborhood N is a square region centered on the feature point P whose side length is an odd number of pixels.
14. The localization method of claim 12, wherein the optical flow information of the i-th layer first target sub-image A_i and the i-th layer second target sub-image B_i is computed from the optical flow information of the (i+1)-th layer first target sub-image A_{i+1} and the (i+1)-th layer second target sub-image B_{i+1}.
15. The localization method of claim 1, wherein step f comprises:
selecting N pixels in the first target image as first pixels;
selecting N pixels in the second target image, each corresponding to one of the first pixels, as second pixels;
correcting the pixel coordinates of the N first pixels with the optical flow information to obtain N corrected first pixels; and
computing the corrected homography matrix from the pixel coordinates of the corrected first pixels and the second pixels.
16. The localization method of claim 1, wherein the first target image has a first target frame, the first target frame being the bounding rectangle of the first target image, and step g comprises:
mapping the first target frame into the second image according to the corrected homography matrix, taking the bounding rectangle of the mapped target frame as a second target frame, and taking the image within the second target frame as the corrected second target image.
17. The localization method of any one of claims 1 to 16, wherein the corrected second target image is used for image recognition and image analysis of the target object.
18. A camera system, comprising:
a first camera for acquiring a first image, the first image being a wide-angle image of a scene;
a second camera for acquiring a second image, the second image being a partial enlarged view of the scene; and
a locating device that uses the localization method of any one of claims 1 to 17 to control the second camera according to the first image so as to locate the target object in the second image.
19. The camera system of claim 18, wherein the first camera is a gun camera and the second camera is a dome camera.
CN201510711384.2A 2015-10-28 2015-10-28 The localization method of camera system and target object Active CN105335977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510711384.2A CN105335977B (en) 2015-10-28 2015-10-28 The localization method of camera system and target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510711384.2A CN105335977B (en) 2015-10-28 2015-10-28 The localization method of camera system and target object

Publications (2)

Publication Number Publication Date
CN105335977A true CN105335977A (en) 2016-02-17
CN105335977B CN105335977B (en) 2018-05-25

Family

ID=55286482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510711384.2A Active CN105335977B (en) 2015-10-28 2015-10-28 The localization method of camera system and target object

Country Status (1)

Country Link
CN (1) CN105335977B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0714081A1 (en) * 1994-11-22 1996-05-29 Sensormatic Electronics Corporation Video surveillance system
US6215519B1 (en) * 1998-03-04 2001-04-10 The Trustees Of Columbia University In The City Of New York Combined wide angle and narrow angle imaging system and method for surveillance and monitoring
US20100141767A1 (en) * 2008-12-10 2010-06-10 Honeywell International Inc. Semi-Automatic Relative Calibration Method for Master Slave Camera Control
CN102148965A (en) * 2011-05-09 2011-08-10 上海芯启电子科技有限公司 Video monitoring system for multi-target tracking close-up shooting
CN102932598A (en) * 2012-11-06 2013-02-13 苏州科达科技股份有限公司 Method for intelligently tracking image on screen by camera
CN103024350A (en) * 2012-11-13 2013-04-03 清华大学 Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same
CN103105858A (en) * 2012-12-29 2013-05-15 上海安维尔信息科技有限公司 Method capable of amplifying and tracking goal in master-slave mode between fixed camera and pan tilt zoom camera
CN103198487A (en) * 2013-04-15 2013-07-10 厦门博聪信息技术有限公司 Automatic calibration method for video monitoring system
CN104574425A (en) * 2015-02-03 2015-04-29 中国人民解放军国防科学技术大学 Calibration and linkage method for primary camera system and secondary camera system on basis of rotary model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUNG-HAO CHEN et al.: "Heterogeneous Fusion of Omnidirectional and PTZ Cameras for Multiple Object Tracking", IEEE Transactions on Circuits and Systems for Video Technology *
SHI Hao et al.: "A Calibration Method for Fisheye PTZ Master-Slave Surveillance Systems", Journal of System Simulation *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107650122A (en) * 2017-07-31 2018-02-02 宁夏巨能机器人股份有限公司 A kind of robot hand alignment system and its localization method based on 3D visual identitys
CN109242769A (en) * 2018-12-13 2019-01-18 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN109242769B (en) * 2018-12-13 2019-03-19 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN109506658A (en) * 2018-12-26 2019-03-22 广州市申迪计算机系统有限公司 Robot autonomous localization method and system
CN109506658B (en) * 2018-12-26 2021-06-08 广州市申迪计算机系统有限公司 Robot autonomous positioning method and system
WO2020182176A1 (en) * 2019-03-13 2020-09-17 华为技术有限公司 Method and apparatus for controlling linkage between ball camera and gun camera, and medium
CN111698455A (en) * 2019-03-13 2020-09-22 华为技术有限公司 Method, device and medium for controlling linkage of ball machine and gun machine
CN111698455B (en) * 2019-03-13 2022-03-11 华为技术有限公司 Method, device and medium for controlling linkage of ball machine and gun machine
CN111800604A (en) * 2020-06-12 2020-10-20 深圳英飞拓科技股份有限公司 Method and device for detecting human shape and human face data based on gun and ball linkage
CN111800605A (en) * 2020-06-15 2020-10-20 深圳英飞拓科技股份有限公司 Gun-ball linkage based vehicle shape and license plate transmission method, system and equipment

Also Published As

Publication number Publication date
CN105335977B (en) 2018-05-25

Similar Documents

Publication Title
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
CN111462200B (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN109509230B (en) SLAM method applied to multi-lens combined panoramic camera
US11900627B2 (en) Image annotation
CN108242079B (en) VSLAM method based on multi-feature visual odometer and graph optimization model
Ji et al. Panoramic SLAM from a multiple fisheye camera rig
CN105335977A (en) Image pickup system and positioning method of target object
WO2020097840A1 (en) Systems and methods for correcting a high-definition map based on detection of obstructing objects
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
CN106878687A (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN110033411B (en) High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle
CN112667837A (en) Automatic image data labeling method and device
CN111207762B (en) Map generation method and device, computer equipment and storage medium
JPWO2020090428A1 (en) Feature detection device, feature detection method and feature detection program
CN110298884A (en) A kind of position and orientation estimation method suitable for monocular vision camera in dynamic environment
CN110146099A (en) A kind of synchronous superposition method based on deep learning
CN111968177A (en) Mobile robot positioning method based on fixed camera vision
CN108648194A (en) Based on the segmentation of CAD model Three-dimensional target recognition and pose measuring method and device
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
Budvytis et al. Large scale joint semantic re-localisation and scene understanding via globally unique instance coordinate regression
CN111998862A (en) Dense binocular SLAM method based on BNN
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN109472778B (en) Appearance detection method for towering structure based on unmanned aerial vehicle
Saleem et al. Neural network-based recent research developments in SLAM for autonomous ground vehicles: A review
CN114663473A (en) Personnel target positioning and tracking method and system based on multi-view information fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant