CN105069784B - Non-parametric method for mutual verification of dual-camera target positioning - Google Patents

Non-parametric method for mutual verification of dual-camera target positioning

Info

Publication number
CN105069784B
CN105069784B · CN201510453771.0A · CN201510453771A
Authority
CN
China
Prior art keywords
video camera
target
corner point
Euclidean distance
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510453771.0A
Other languages
Chinese (zh)
Other versions
CN105069784A (en)
Inventor
向桂山
王全强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Morning Ann Polytron Technologies Inc
Original Assignee
Hangzhou Morning Ann Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Morning Ann Polytron Technologies Inc filed Critical Hangzhou Morning Ann Polytron Technologies Inc
Priority to CN201510453771.0A priority Critical patent/CN105069784B/en
Publication of CN105069784A publication Critical patent/CN105069784A/en
Application granted granted Critical
Publication of CN105069784B publication Critical patent/CN105069784B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a non-parametric method for mutual verification of dual-camera target positioning, applied in education recording-and-broadcasting, video conferencing, or intelligent monitoring systems. Only four reference points need to be set in the overlapping region of the two cameras' fields of view; the relation between a detected target and the four reference points then determines whether the targets in the two cameras belong to the same position. The method comprises the following steps: set reference points at the lower-right, lower-left, upper-right, and upper-left corners of the overlapping region in the front and rear cameras respectively; compute the Euclidean distances from the target detected in each camera to the four reference points; normalize the four Euclidean distance values from each camera into the range [0,1]; form the target's distance vector in each camera; compute the Euclidean distance between the target's distance vectors in the front and rear cameras; and, given a judgment threshold, judge whether the targets detected by the front and rear cameras are the same target.

Description

Non-parametric method for mutual verification of dual-camera target positioning
Technical field
The present invention relates to a non-parametric method for mutual verification of dual-camera target positioning, applied in education recording-and-broadcasting, video conferencing, or intelligent monitoring systems.
Background technology
In classrooms, meeting rooms, or indoor-monitoring settings with high recording requirements, one camera is usually installed on each of the front and rear walls of the room to jointly detect, position, or monitor targets of interest. Both cameras have built-in intelligent image algorithms for target detection. The front and rear cameras communicate interactively in real time and cooperatively verify whether the targets each has detected belong to the same position; only when they do is the detection confirmed, which is more accurate and stable than detection by a single camera. Among the multiple targets detected by a single camera, some are false targets, i.e. interference; only when the front and rear cameras both detect a target at the same point in space can it be confirmed as a real target. For example, the Chinese patent application No. 201310589445.3, titled "An entity localization method based on dual cameras", adopts a similar approach.
In existing approaches, verifying that the targets detected by the front and rear cameras belong to the same position usually requires knowing the following parameters in advance: internal camera parameters such as focal length, field-of-view angle, and distortion coefficients; the mounting height of each camera; the three-dimensional relative position of the two cameras; and so on. Only then, via complicated spatial geometric transformations, can one judge whether the target detected in the front camera and the target detected in the rear camera belong to the same position. In actual installations the relative positions of the front and rear cameras vary widely, accurately obtaining the above parameters is cumbersome, and spatial geometric mapping algorithms are complex and difficult to design, so a simple general-purpose solution is hard to achieve.
Summary of the invention
The object of the present invention is to overcome the above shortcomings of the prior art and to provide a reasonably designed non-parametric method for mutual verification of dual-camera target positioning. The method requires neither the cameras' intrinsic parameters nor measurement of the three-dimensional relative position of the two cameras; it only requires setting four reference points in the overlapping region of the two cameras' fields of view, and the relation between a detected target and the four reference points then determines whether the targets in the two cameras belong to the same position.
The technical solution adopted by the present invention to solve the above problems is as follows:
A non-parametric method for mutual verification of dual-camera target positioning, in which one camera is installed at the front and one at the rear of a room and both cameras can capture the target, characterized by comprising the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the lower-right, lower-left, upper-right, and upper-left corners of the region where the two cameras' fields of view overlap;
In the front camera, the coordinate of the lower-right corner point of the overlapping region is denoted Pf0, that of the lower-left corner point Pf1, that of the upper-left corner point Pf2, and that of the upper-right corner point Pf3; in this camera, the coordinate of the center of the detected target is denoted Tf;
In the rear camera, the coordinate of the upper-left corner point of the overlapping region is denoted Pb0, that of the upper-right corner point Pb1, that of the lower-right corner point Pb2, and that of the lower-left corner point Pb3; in this camera, the coordinate of the center of the detected target is denoted Tb;
Step 2: compute the Euclidean distances from the target detected in each camera to the four reference points:
(1) in the front camera, the Euclidean distances from the target to the four reference points are Df0, Df1, Df2, and Df3;
(2) in the rear camera, the Euclidean distances from the target to the four reference points are Db0, Db1, Db2, and Db3;
Step 3: normalize the four Euclidean distance values from each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnfk = Dfk/(Df0+Df1+Df2+Df3), where k = 0, 1, 2, 3;
(2) in the rear camera, the normalized distance Dnbk = Dbk/(Db0+Db1+Db2+Db3), where k = 0, 1, 2, 3;
Step 4: form the target's distance vector in each camera:
(1) the distance vector in the front camera is [Dnf0, Dnf1, Dnf2, Dnf3];
(2) the distance vector in the rear camera is [Dnb0, Dnb1, Dnb2, Dnb3];
Step 5: compute the Euclidean distance DT between the target's distance vectors in the front and rear cameras:
DT=sqrt((Dnf0-Dnb0)^2+(Dnf1-Dnb1)^2+(Dnf2-Dnb2)^2+(Dnf3-Dnb3)^2);
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same target:
if DT ≤ Tresh, the front and rear cameras have detected the same target;
if DT > Tresh, the front and rear cameras have not detected the same target.
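Steps 2 through 6 can be sketched in Python (an editor's illustration, not part of the patent's disclosure; the function names, toy coordinates, and the threshold value 0.05 are assumptions):

```python
import math

def distance_vector(target, corners):
    """Steps 2-3: Euclidean distances from the target to the four
    reference corner points, normalized by their sum so the four
    values lie in [0, 1]."""
    dists = [math.hypot(cx - target[0], cy - target[1]) for cx, cy in corners]
    total = sum(dists)
    return [d / total for d in dists]

def is_same_target(tf, corners_f, tb, corners_b, tresh):
    """Steps 4-6: form the two distance vectors, compute their
    Euclidean distance DT, and compare it against the threshold.
    The corner lists must pair the same physical corner at the same
    index (front: lower-right, lower-left, upper-left, upper-right;
    rear: upper-left, upper-right, lower-right, lower-left)."""
    vf = distance_vector(tf, corners_f)
    vb = distance_vector(tb, corners_b)
    dt = math.sqrt(sum((a - b) ** 2 for a, b in zip(vf, vb)))
    return dt <= tresh

# Toy example: a 100x100 overlapping region seen by both cameras, with
# the target at its center, so both vectors are [0.25]*4 and DT = 0.
corners_f = [(100, 100), (0, 100), (0, 0), (100, 0)]  # LR, LL, UL, UR
corners_b = [(0, 0), (100, 0), (100, 100), (0, 100)]  # UL, UR, LR, LL
print(is_same_target((50, 50), corners_f, (50, 50), corners_b, 0.05))  # True
```

In practice the corner coordinates would come from calibrating the field-of-view intersection region in each camera's image, as the patent describes.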
When computing the Euclidean distances from the detected targets to the four reference points, the present invention introduces viewing-angle weight coefficients: those of the front camera are Coeff0 and Coeff1, and those of the rear camera are Coefb0 and Coefb1.
In the front camera, the Euclidean distance from the target to the four corner points after introducing the viewing-angle weight coefficients is Df'k = Coeffn × Dfk, where n = 0, 1 and k = 0, 1, 2, 3;
in the rear camera, the Euclidean distance from the target to the four corner points after introducing the viewing-angle weight coefficients is Db'k = Coefbn × Dbk, where n = 0, 1 and k = 0, 1, 2, 3.
In the front camera, the weight Coeff0 is applied when computing Df'0 (lower-right corner point) and Df'1 (lower-left corner point), and the weight Coeff1 when computing Df'2 (upper-left corner point) and Df'3 (upper-right corner point), satisfying the relation Coeff0 + Coeff1 = 2, where Coeff0 < 1 and Coeff1 > 1;
in the rear camera, the weight Coefb0 is applied when computing Db'0 (upper-left corner point) and Db'1 (upper-right corner point), and the weight Coefb1 when computing Db'2 (lower-right corner point) and Db'3 (lower-left corner point), satisfying the relation Coefb0 + Coefb1 = 2, where Coefb0 > 1 and Coefb1 < 1.
The viewing-angle weight coefficients correct the distance error caused by perspective. For example, if a target stands at the center of the classroom, its distances to the four corners should be equal; but in the front camera's view, because of perspective, the pixel distances from the target to the two lower corners appear larger than they should, while those to the two upper corners appear smaller. To correct this error, a weight Coeff0 less than 1 is used when computing the target's distances to the two lower corners, and a weight Coeff1 greater than 1 when computing its distances to the two upper corners; the two weights are complementary, so Coeff0 + Coeff1 = 2. The same principle applies to the rear camera.
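This correction can be illustrated numerically (an editor's sketch; the measured distances 60/40 and the weights 0.8/1.2 are invented values chosen to satisfy Coeff0 + Coeff1 = 2):

```python
def apply_view_weights(raw_dists, coef0, coef1):
    """Df'k = Coeff_n * Dfk: coef0 weights the first two entries
    (the bottom corners in the front camera's view), coef1 the last
    two (the top corners)."""
    assert abs(coef0 + coef1 - 2.0) < 1e-9  # complementary weights
    weights = [coef0, coef0, coef1, coef1]
    return [w * d for w, d in zip(weights, raw_dists)]

# A target at the room centre: the true distances to all four corners
# are equal, but perspective inflates the bottom-corner pixel distances
# (here 60) and shrinks the top-corner ones (here 40).  Weighting with
# Coeff0 = 0.8 < 1 and Coeff1 = 1.2 > 1 evens the four values out again.
corrected = apply_view_weights([60, 60, 40, 40], 0.8, 1.2)
print([round(d, 6) for d in corrected])  # all four values become 48.0
```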
The threshold Tresh of the present invention is obtained from actual testing and experiment. It is determined from two aspects: (1) when the cameras are installed at different positions, the Euclidean distance DT between the distance vectors of a same-position target in the front and rear cameras; (2) when the cameras are installed at the same position, the Euclidean distance DT between the distance vectors of targets at different positions in the front and rear cameras. Considering both situations, the maximum DT value is taken, with an appropriate margin, as the threshold Tresh.
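The threshold selection can be sketched as follows (an editor's illustration of one plausible reading of the rule above — take the maximum same-position DT plus a margin; all DT samples and the margin value are hypothetical):

```python
def choose_threshold(same_position_dts, different_position_dts, margin=0.02):
    """Aspect 1: DT values measured for genuine same-position targets
    under varied camera installations.  Aspect 2: DT values for
    different-position targets under an identical installation.
    Take the same-position maximum plus a margin, and check that it
    still separates the two groups."""
    tresh = max(same_position_dts) + margin
    assert tresh < min(different_position_dts), "margin leaves no separation"
    return tresh

same_dts = [0.031, 0.044, 0.052, 0.038]   # same target, varied installs
diff_dts = [0.180, 0.260, 0.145, 0.210]   # different targets
print(round(choose_threshold(same_dts, diff_dts), 3))  # 0.072
```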
Compared with the prior art, the present invention has the following advantages and effects: (1) the cameras' internal parameters need not be known; (2) the relative position of the two cameras need not be measured; (3) there are few restrictions on camera installation positions; (4) the model is simple, with low algorithmic complexity and low computational cost; (5) the algorithm is robust and reliable, and adapts to varying environments.
Brief description of the drawings
Fig. 1 is the algorithm flowchart of the present invention.
Fig. 2 is a schematic diagram of the positional relation between the front and rear cameras of the present invention.
Fig. 3 is a schematic diagram of the front camera's view in the present invention.
Fig. 4 is a schematic diagram of the rear camera's view in the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments; the following embodiments explain the invention but do not limit it.
Referring to Figs. 1-4, in the present invention one camera is installed on each of the front and rear indoor walls, facing in opposite directions, so that most of the room is imaged by both cameras and both can capture the target. Both cameras have built-in intelligent image algorithms with the function of automatically detecting the target and obtaining its coordinates.
Embodiment 1:
This embodiment does not introduce viewing-angle weight coefficients; coordinates are two-dimensional. The embodiment comprises the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the lower-right, lower-left, upper-right, and upper-left corners of the region where the two cameras' fields of view overlap;
In the front camera, the coordinate of the lower-right corner point of the overlapping region is denoted Pf(x0, y0), that of the lower-left corner point Pf(x1, y1), that of the upper-left corner point Pf(x2, y2), and that of the upper-right corner point Pf(x3, y3); in this camera, the coordinate of the center of the detected target is denoted Tf(x, y). The above coordinates are referenced to the front camera's view.
In the rear camera, the coordinate of the upper-left corner point of the overlapping region is denoted Pb(x0, y0), that of the upper-right corner point Pb(x1, y1), that of the lower-right corner point Pb(x2, y2), and that of the lower-left corner point Pb(x3, y3); in this camera, the coordinate of the center of the detected target is denoted Tb(x, y). The above coordinates are referenced to the rear camera's view.
Step 2: compute the Euclidean distances from the target detected in each camera to the four reference points:
(1) In the front camera, compute the Euclidean distance Dfk from the target to each of the four corner points:
Dfk = sqrt((xk - x)^2 + (yk - y)^2), where k = 0, 1, 2, 3;
Df0 is the Euclidean distance from the target to the lower-right corner point; Df1 to the lower-left corner point; Df2 to the upper-left corner point; Df3 to the upper-right corner point;
(2) In the rear camera, compute the Euclidean distance Dbk from the target to each of the four corner points:
Dbk = sqrt((xk - x)^2 + (yk - y)^2), where k = 0, 1, 2, 3;
Db0 is the Euclidean distance from the target to the upper-left corner point; Db1 to the upper-right corner point; Db2 to the lower-right corner point; Db3 to the lower-left corner point.
Step 3: normalize the four Euclidean distance values from each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnfk = Dfk/(Df0+Df1+Df2+Df3), where k = 0, 1, 2, 3;
Dnf0 is the normalized distance to the lower-right corner point; Dnf1 to the lower-left corner point; Dnf2 to the upper-left corner point; Dnf3 to the upper-right corner point;
(2) in the rear camera, the normalized distance Dnbk = Dbk/(Db0+Db1+Db2+Db3), where k = 0, 1, 2, 3;
Dnb0 is the normalized distance to the upper-left corner point; Dnb1 to the upper-right corner point; Dnb2 to the lower-right corner point; Dnb3 to the lower-left corner point.
Step 4: form the target's distance vector in each camera:
the distance vector in the front camera is [Dnf0, Dnf1, Dnf2, Dnf3];
the distance vector in the rear camera is [Dnb0, Dnb1, Dnb2, Dnb3].
Step 5: compute the Euclidean distance DT between the target's distance vectors in the front and rear cameras:
DT=sqrt((Dnf0-Dnb0)^2+(Dnf1-Dnb1)^2+(Dnf2-Dnb2)^2+(Dnf3-Dnb3)^2)。
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same target:
if DT ≤ Tresh, the front and rear cameras have detected the same target;
if DT > Tresh, the front and rear cameras have not detected the same target.
Embodiment 2:
This embodiment introduces viewing-angle weight coefficients; coordinates are two-dimensional. The embodiment comprises the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the lower-right, lower-left, upper-right, and upper-left corners of the region where the two cameras' fields of view overlap;
In the front camera, the coordinate of the lower-right corner point of the overlapping region is denoted Pf(x0, y0), that of the lower-left corner point Pf(x1, y1), that of the upper-left corner point Pf(x2, y2), and that of the upper-right corner point Pf(x3, y3); in this camera, the coordinate of the center of the detected target is denoted Tf(x, y). The above coordinates are referenced to the front camera's view.
In the rear camera, the coordinate of the upper-left corner point of the overlapping region is denoted Pb(x0, y0), that of the upper-right corner point Pb(x1, y1), that of the lower-right corner point Pb(x2, y2), and that of the lower-left corner point Pb(x3, y3); in this camera, the coordinate of the center of the detected target is denoted Tb(x, y). The above coordinates are referenced to the rear camera's view.
Step 2: compute the Euclidean distances, after introducing the viewing-angle weight coefficients, from the target detected in each camera to the four reference points:
(1) In the front camera, compute the weighted Euclidean distance Df'k from the target to each of the four corner points:
Df'k = Coeffn × sqrt((xk - x)^2 + (yk - y)^2), where n = 0, 1 and k = 0, 1, 2, 3;
Df'0 is the weighted Euclidean distance from the target to the lower-right corner point; Df'1 to the lower-left corner point; Df'2 to the upper-left corner point; Df'3 to the upper-right corner point;
where Coeff0 and Coeff1 are the viewing-angle weight coefficients: Coeff0 is applied when computing Df'0 and Df'1, and Coeff1 when computing Df'2 and Df'3, satisfying the relation:
Coeff0 + Coeff1 = 2, where Coeff0 < 1 and Coeff1 > 1;
(2) In the rear camera, compute the weighted Euclidean distance Db'k from the target to each of the four corner points:
Db'k = Coefbn × sqrt((xk - x)^2 + (yk - y)^2), where n = 0, 1 and k = 0, 1, 2, 3;
Db'0 is the weighted Euclidean distance from the target to the upper-left corner point; Db'1 to the upper-right corner point; Db'2 to the lower-right corner point; Db'3 to the lower-left corner point;
where Coefb0 and Coefb1 are the viewing-angle weight coefficients: Coefb0 is applied when computing Db'0 and Db'1, and Coefb1 when computing Db'2 and Db'3, satisfying the relation:
Coefb0 + Coefb1 = 2, where Coefb0 > 1 and Coefb1 < 1.
Step 3: normalize the four weighted Euclidean distance values from each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnf'k = Df'k/(Df'0+Df'1+Df'2+Df'3), where k = 0, 1, 2, 3;
Dnf'0 is the normalized weighted distance to the lower-right corner point; Dnf'1 to the lower-left corner point; Dnf'2 to the upper-left corner point; Dnf'3 to the upper-right corner point;
(2) in the rear camera, the normalized distance Dnb'k = Db'k/(Db'0+Db'1+Db'2+Db'3), where k = 0, 1, 2, 3;
Dnb'0 is the normalized weighted distance to the upper-left corner point; Dnb'1 to the upper-right corner point; Dnb'2 to the lower-right corner point; Dnb'3 to the lower-left corner point.
Step 4: form the distance vectors in the front and rear cameras:
the distance vector in the front camera is [Dnf'0, Dnf'1, Dnf'2, Dnf'3];
the distance vector in the rear camera is [Dnb'0, Dnb'1, Dnb'2, Dnb'3].
Step 5: compute the Euclidean distance DT' between the target's distance vectors in the front and rear cameras:
DT' = sqrt((Dnf'0-Dnb'0)^2 + (Dnf'1-Dnb'1)^2 + (Dnf'2-Dnb'2)^2 + (Dnf'3-Dnb'3)^2).
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same target:
if DT' ≤ Tresh, the front and rear cameras have detected the same target;
if DT' > Tresh, the front and rear cameras have not detected the same target.
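Embodiment 2 can be put together end-to-end as a compact sketch (an editor's illustration, not part of the patent's disclosure; it takes the already-measured pixel distances as input, and the weight values, sample distances, and threshold are assumptions):

```python
import math

def weighted_vector(raw_dists, coef_a, coef_b):
    """Steps 2-3: apply the viewing-angle weights (first two distances
    weighted by coef_a, last two by coef_b), then normalize by the sum."""
    weighted = [c * d for c, d in
                zip([coef_a, coef_a, coef_b, coef_b], raw_dists)]
    total = sum(weighted)
    return [d / total for d in weighted]

def same_position(raw_front, raw_rear, tresh,
                  coeff0=0.8, coeff1=1.2, coefb0=1.2, coefb1=0.8):
    """Steps 4-6: DT' between the two weighted, normalized vectors,
    compared against the threshold.  raw_front is ordered (lower-right,
    lower-left, upper-left, upper-right); raw_rear is ordered
    (upper-left, upper-right, lower-right, lower-left), so index k
    refers to the same physical corner in both lists."""
    vf = weighted_vector(raw_front, coeff0, coeff1)
    vb = weighted_vector(raw_rear, coefb0, coefb1)
    dt = math.sqrt(sum((a - b) ** 2 for a, b in zip(vf, vb)))
    return dt <= tresh

# A target at the room centre: each camera measures inflated distances
# (60) to its own near corners and shrunken ones (40) to the far corners.
# After weighting, both vectors become [0.25]*4 and DT' is near 0.
print(same_position([60, 60, 40, 40], [40, 40, 60, 60], 0.05))  # True
```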
In summary, the dual-camera target-location verification algorithm proposed by this patent requires neither the cameras' intrinsic parameters nor the relative position and dimensions of the two cameras; only the four corner points of the field-of-view intersection region need to be calibrated. From the distances between the target position detected in each camera and the four corner points, and then from the distance between the resulting distance vectors, it judges whether the targets detected by the front and rear cameras belong to the same position. The algorithm is simple, convenient to implement, and reliable in its results.
Furthermore, it should be noted that the specific embodiments described in this specification, including the shapes and names of their parts and components, may differ; the above content of this specification is only an illustrative explanation of the structure of the present invention.

Claims (4)

1. A non-parametric method for mutual verification of dual-camera target positioning, in which one camera is installed at the front and one at the rear of a room and both cameras can capture the target, characterized by comprising the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the lower-right, lower-left, upper-right, and upper-left corners of the region where the two cameras' fields of view overlap;
In the front camera, the coordinate of the lower-right corner point of the overlapping region is denoted Pf0, that of the lower-left corner point Pf1, that of the upper-left corner point Pf2, and that of the upper-right corner point Pf3; in this camera, the coordinate of the center of the detected target is denoted Tf;
In the rear camera, the coordinate of the upper-left corner point of the overlapping region is denoted Pb0, that of the upper-right corner point Pb1, that of the lower-right corner point Pb2, and that of the lower-left corner point Pb3; in this camera, the coordinate of the center of the detected target is denoted Tb;
Step 2: compute the Euclidean distances from the target detected in each camera to the four reference points:
(1) in the front camera, the Euclidean distances from the target to the four reference points are Df0, Df1, Df2, and Df3;
(2) in the rear camera, the Euclidean distances from the target to the four reference points are Db0, Db1, Db2, and Db3;
Step 3: normalize the four Euclidean distance values from each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnfk = Dfk/(Df0+Df1+Df2+Df3), where k = 0, 1, 2, 3;
(2) in the rear camera, the normalized distance Dnbk = Dbk/(Db0+Db1+Db2+Db3), where k = 0, 1, 2, 3;
Step 4: form the target's distance vector in each camera:
(1) the distance vector in the front camera is [Dnf0, Dnf1, Dnf2, Dnf3];
(2) the distance vector in the rear camera is [Dnb0, Dnb1, Dnb2, Dnb3];
Step 5: compute the Euclidean distance DT between the target's distance vectors in the front and rear cameras:
DT=sqrt((Dnf0-Dnb0)^2+(Dnf1-Dnb1)^2+(Dnf2-Dnb2)^2+(Dnf3-Dnb3)^2);
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same target:
if DT ≤ Tresh, the front and rear cameras have detected the same target;
if DT > Tresh, the front and rear cameras have not detected the same target.
2. The non-parametric method for mutual verification of dual-camera target positioning according to claim 1, characterized in that: when computing the Euclidean distances from the targets detected in the front and rear cameras to the four reference points, viewing-angle weight coefficients are introduced, those of the front camera being Coeff0 and Coeff1 and those of the rear camera being Coefb0 and Coefb1;
in the front camera, the Euclidean distance from the target to the four corner points after introducing the viewing-angle weight coefficients is Df'k = Coeffn × Dfk, where n = 0, 1 and k = 0, 1, 2, 3;
in the rear camera, the Euclidean distance from the target to the four corner points after introducing the viewing-angle weight coefficients is Db'k = Coefbn × Dbk, where n = 0, 1 and k = 0, 1, 2, 3.
3. The non-parametric method for mutual verification of dual-camera target positioning according to claim 2, characterized in that:
in the front camera, the weight Coeff0 is applied when computing Df'0 (lower-right corner point) and Df'1 (lower-left corner point), and the weight Coeff1 when computing Df'2 (upper-left corner point) and Df'3 (upper-right corner point), satisfying the relation Coeff0 + Coeff1 = 2, where Coeff0 < 1 and Coeff1 > 1;
in the rear camera, the weight Coefb0 is applied when computing Db'0 (upper-left corner point) and Db'1 (upper-right corner point), and the weight Coefb1 when computing Db'2 (lower-right corner point) and Db'3 (lower-left corner point), satisfying the relation Coefb0 + Coefb1 = 2, where Coefb0 > 1 and Coefb1 < 1.
4. The non-parametric method for mutual verification of dual-camera target positioning according to claim 1, characterized in that: the threshold Tresh is determined from two aspects: (1) when the cameras are installed at different positions, the Euclidean distance DT between the distance vectors of a same-position target in the front and rear cameras; (2) when the cameras are installed at the same position, the Euclidean distance DT between the distance vectors of targets at different positions in the front and rear cameras; considering both situations, the maximum DT value is taken, with an appropriate margin according to actual conditions, as the threshold Tresh.
CN201510453771.0A 2015-07-29 2015-07-29 Non-parametric method for mutual verification of dual-camera target positioning Active CN105069784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510453771.0A CN105069784B (en) 2015-07-29 2015-07-29 Non-parametric method for mutual verification of dual-camera target positioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510453771.0A CN105069784B (en) 2015-07-29 2015-07-29 Non-parametric method for mutual verification of dual-camera target positioning

Publications (2)

Publication Number Publication Date
CN105069784A CN105069784A (en) 2015-11-18
CN105069784B true CN105069784B (en) 2018-01-05

Family

ID=54499141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510453771.0A Active CN105069784B (en) 2015-07-29 2015-07-29 Non-parametric method for mutual verification of dual-camera target positioning

Country Status (1)

Country Link
CN (1) CN105069784B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780606A (en) * 2016-12-31 2017-05-31 深圳市虚拟现实技术有限公司 Four-camera positioning device and method
CN109754426B (en) * 2017-11-01 2021-04-23 虹软科技股份有限公司 Method, system and device for verifying camera calibration parameters
JP6892134B2 (en) * 2019-01-25 2021-06-18 学校法人福岡工業大学 Measurement system, measurement method and measurement program
CN111104867B (en) * 2019-11-25 2023-08-25 北京迈格威科技有限公司 Recognition model training and vehicle re-recognition method and device based on part segmentation
CN111982061B (en) * 2020-07-15 2022-08-23 杭州晨安科技股份有限公司 Distance measurement method based on different focal lengths of binocular fixed-focus cameras
CN115862144B (en) * 2022-12-23 2023-06-23 杭州晨安科技股份有限公司 Gesture recognition method for camera

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4069932B2 (en) * 2005-05-23 2008-04-02 オムロン株式会社 Human detection device and human detection method
CN101299270A (en) * 2008-05-27 2008-11-05 东南大学 Multiple video cameras synchronous quick calibration method in three-dimensional scanning system
CN101566465A (en) * 2009-05-18 2009-10-28 西安交通大学 Method for measuring object deformation in real time
CN101931793A (en) * 2009-06-18 2010-12-29 霍尼韦尔国际公司 To observing the system and method that limited video surveillance fields carries out addressing
CN101969548A (en) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquiring method and device based on binocular camera shooting
CN102831385A (en) * 2011-06-13 2012-12-19 索尼公司 Device and method for target identification in multiple-camera monitoring network
CN103473758A (en) * 2013-05-13 2013-12-25 中国科学院苏州生物医学工程技术研究所 Secondary calibration method of binocular stereo vision system
CN103557834A (en) * 2013-11-20 2014-02-05 无锡儒安科技有限公司 Dual-camera-based solid positioning method
CN103777643A (en) * 2012-10-23 2014-05-07 北京网动网络科技股份有限公司 Automatic camera tracking system based on image positioning and tracking method
CN104021538A (en) * 2013-02-28 2014-09-03 株式会社理光 Object positioning method and device
CN104217439A (en) * 2014-09-26 2014-12-17 南京工程学院 Indoor visual positioning system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069192A1 (en) * 2009-10-20 2012-03-22 Qing-Hu Li Data Processing System and Method

Also Published As

Publication number Publication date
CN105069784A (en) 2015-11-18

Similar Documents

Publication Publication Date Title
CN105069784B (en) A kind of twin camera target positioning mutually checking nonparametric technique
US11704833B2 (en) Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium
CN103837869B (en) Based on single line laser radar and the CCD camera scaling method of vector relations
CN105072414B (en) A kind of target detection and tracking and system
CN104851104B (en) Using the flexible big view calibration method of target high speed camera close shot
CN103491339B (en) Video acquiring method, equipment and system
CN103033132B (en) Plane survey method and device based on monocular vision
CN110287519A (en) A kind of the building engineering construction progress monitoring method and system of integrated BIM
CN105157592B (en) The deformed shape of the deformable wing of flexible trailing edge and the measuring method of speed based on binocular vision
CN103700140B (en) Spatial modeling method used for linkage of single gun camera and multiple dome cameras
CN104075688A (en) Distance measurement method of binocular stereoscopic gazing monitoring system
CN103200358B (en) Coordinate transformation method between video camera and target scene and device
CN109215082A (en) A kind of camera parameter scaling method, device, equipment and system
CN103425626B (en) Coordinate transformation method and device between a kind of video camera
CN111242991B (en) Method for quickly registering visible light and infrared camera
CN103226838A (en) Real-time spatial positioning method for mobile monitoring target in geographical scene
CN103795935B (en) A kind of camera shooting type multi-target orientation method and device based on image rectification
CN105243637A (en) Panorama image stitching method based on three-dimensional laser point cloud
CN104835141B (en) The mobile terminal and method of three-dimensional model are established in a kind of laser ranging
CN109272532A (en) Model pose calculation method based on binocular vision
CN103646394A (en) Mixed visual system calibration method based on Kinect camera
CN109166153A (en) Tower crane high altitude operation 3-D positioning method and positioning system based on binocular vision
CN106033614B (en) A kind of mobile camera motion object detection method under strong parallax
CN110595433A (en) Binocular vision-based transmission tower inclination measurement method
CN109191533B (en) Tower crane high-altitude construction method based on fabricated building

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Hangzhou City, Zhejiang Province, Xihu District three, 310030 West Lake science and Technology Park West Road No. 16 building 4 layer 4

Applicant after: Hangzhou morning Ann Polytron Technologies Inc

Address before: Hangzhou City, Zhejiang Province, Xihu District three, 310030 West Lake science and Technology Park West Road No. 16 building 4 layer 4

Applicant before: Hangzhou Chingan Vision Digital Technology Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant