CN105069784A - Double-camera target positioning mutual authentication nonparametric method - Google Patents
Double-camera target positioning mutual authentication nonparametric method
- Publication number
- CN105069784A (application number CN201510453771.0A)
- Authority
- CN
- China
- Prior art keywords
- video camera
- angle point
- coordinate
- distance
- place object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The present invention relates to a double-camera target positioning mutual verification nonparametric method applied in education recording-and-broadcasting, video conference, or intelligent monitoring systems. As long as four reference points are set in the common overlap region of the two cameras' fields of view, whether the targets detected in the two cameras are the same position target is determined from the relation between the detected targets and the four reference points. The method comprises the following steps: setting the bottom-right, bottom-left, top-right, and top-left corners of the overlap region as reference points in the front and rear cameras respectively; calculating the Euclidean distances from the position target detected in each camera to the four reference points; normalizing the four Euclidean distance values of each camera's position target into the range [0,1]; forming the distance vectors of the position target in the front and rear cameras; calculating the Euclidean distance between the two distance vectors; and, given a judgment threshold, judging whether the targets detected by the two cameras are the same position target.
Description
Technical field
The present invention relates to a double-camera target positioning mutual verification nonparametric method, applied in education recording-and-broadcasting, video conference, or intelligent monitoring systems.
Background technology
In education recording classrooms, meeting rooms, or indoor monitoring scenarios with higher requirements, one camera is usually installed on each of the front and rear walls of the room to jointly detect, locate, or monitor the indoor targets of interest. Both the front and rear cameras have built-in intelligent image algorithms for detecting targets. The two cameras communicate interactively in real time and, in a cooperative manner, verify whether their independently detected targets belong to the same position target; if they do, the target is confirmed as genuinely detected, achieving more accurate and stable detection than a single camera. Among the multiple targets detected by a single camera, some are false targets, i.e., interference; only when both front and rear cameras confirm a target at the same spatial point can it be determined to be a real target. The Chinese patent with application number 201310589445.3, titled "An entity localization method based on dual cameras", adopts a similar approach.
In existing approaches, verifying that the targets detected by the front and rear cameras belong to the same position target usually requires the following parameters to be known in advance: the internal parameters of the cameras, such as focal length, field-of-view angle, and distortion coefficients; the installation height of the cameras; and the three-dimensional relative position of the two cameras. Only then, through complicated spatial geometric transformations, can it be judged whether the target detected by the front camera and the target detected by the rear camera belong to the same position. In actual installation, the relative position of the front and rear cameras varies widely, accurately obtaining the above parameters is quite troublesome, and the spatial geometric mapping algorithm is complex and difficult to design, making a simple general-purpose solution hard to achieve.
Summary of the invention
The object of the invention is to overcome the above shortcomings of the prior art and to provide a reasonably designed double-camera target positioning mutual verification nonparametric method. The method needs neither the internal parameters of the cameras nor a measurement of the three-dimensional relative position between the two cameras; it only needs four reference points set in the common overlap region of the two cameras' fields of view, and determines from the relation between the detected targets and the four reference points whether the targets in the two cameras belong to the same position.
The technical scheme adopted by the present invention to solve the problem is as follows:
A double-camera target positioning mutual verification nonparametric method, wherein one camera is installed at each of the front and rear of a room and both cameras can capture the position target, characterized by comprising the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the bottom-right, bottom-left, top-right, and top-left corners of the overlap region of the two cameras' fields of view.
In the front camera, the coordinate of the bottom-right corner point of the overlap region is denoted Pf0, the bottom-left corner point Pf1, the top-left corner point Pf2, and the top-right corner point Pf3; the coordinate of the center of the position target detected in this camera is denoted Tf.
In the rear camera, the coordinate of the top-left corner point of the overlap region is denoted Pb0, the top-right corner point Pb1, the bottom-right corner point Pb2, and the bottom-left corner point Pb3; the coordinate of the center of the position target detected in this camera is denoted Tb.
Step 2: calculate the Euclidean distances from the position target detected in each of the front and rear cameras to the four reference points:
(1) the Euclidean distances from the front camera's position target to the four reference points are Df0, Df1, Df2, and Df3;
(2) the Euclidean distances from the rear camera's position target to the four reference points are Db0, Db1, Db2, and Db3.
Step 3: normalize the four Euclidean distance values of the position target in each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnf_k = Df_k / (Df0 + Df1 + Df2 + Df3), where k = 0, 1, 2, 3;
(2) in the rear camera, the normalized distance Dnb_k = Db_k / (Db0 + Db1 + Db2 + Db3), where k = 0, 1, 2, 3.
Step 4: form the distance vectors of the position target in the front and rear cameras:
(1) the distance vector in the front camera is [Dnf0, Dnf1, Dnf2, Dnf3];
(2) the distance vector in the rear camera is [Dnb0, Dnb1, Dnb2, Dnb3].
Step 5: calculate the Euclidean distance DT between the distance vectors of the position target in the front and rear cameras:
DT = sqrt((Dnf0 - Dnb0)^2 + (Dnf1 - Dnb1)^2 + (Dnf2 - Dnb2)^2 + (Dnf3 - Dnb3)^2).
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same position target:
if DT ≤ Tresh, the targets detected by the front and rear cameras are the same position target;
if DT > Tresh, they are not.
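The six steps above can be sketched in a few lines of Python. Note that corner index k must refer to the same physical corner in both lists, which is what the patent's corner numbering ensures; the coordinates and the threshold value used below are illustrative assumptions, not values taken from the patent.

```python
import math

def distance_vector(target, corners):
    """Steps 2-3: Euclidean distances from the target to the four corner
    points, normalized so that the four values sum to 1."""
    d = [math.hypot(cx - target[0], cy - target[1]) for cx, cy in corners]
    s = sum(d)
    return [dk / s for dk in d]

def same_position(tf, corners_f, tb, corners_b, tresh):
    """Steps 4-6: compare the two normalized distance vectors; the targets
    are judged to be the same position target when DT <= tresh."""
    dnf = distance_vector(tf, corners_f)
    dnb = distance_vector(tb, corners_b)
    dt = math.sqrt(sum((a - b) ** 2 for a, b in zip(dnf, dnb)))
    return dt <= tresh, dt
```

Because each vector is normalized within its own image, the comparison never needs the cameras' internal parameters or their relative pose, which is the point of the method.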
When the present invention calculates the Euclidean distances from the detected target to the four reference points in the front and rear cameras, view-angle weight coefficients are introduced: those of the front camera are Coeff0 and Coeff1, and those of the rear camera are Coefb0 and Coefb1.
In the front camera, the Euclidean distance from the position target to the four corner points after introducing the view-angle weight coefficients is Df'_k = Coeff_n × Df_k, where n = 0, 1 and k = 0, 1, 2, 3.
In the rear camera, the Euclidean distance from the position target to the four corner points after introducing the view-angle weight coefficients is Db'_k = Coefb_n × Db_k, where n = 0, 1 and k = 0, 1, 2, 3.
In the front camera of the present invention, weight Coeff0 is used when calculating Df'0 for the bottom-right corner point and Df'1 for the bottom-left corner point, and weight Coeff1 when calculating Df'2 for the top-left corner point and Df'3 for the top-right corner point, satisfying the relation Coeff0 + Coeff1 = 2, where Coeff0 < 1 and Coeff1 > 1.
In the rear camera, weight Coefb0 is used when calculating Db'0 for the top-left corner point and Db'1 for the top-right corner point, and weight Coefb1 when calculating Db'2 for the bottom-right corner point and Db'3 for the bottom-left corner point, satisfying the relation Coefb0 + Coefb1 = 2, where Coefb0 > 1 and Coefb1 < 1.
The view-angle weight coefficients correct the distance error produced by perspective. For example, when a target stands in the middle of a classroom, its distances to the four corners of the room should be equal; but in the front camera's view, due to perspective, the pixel distances from the target to the two lower (nearer) corners appear larger than they should be, while the pixel distances to the two upper (farther) corners appear smaller. To correct this perspective-induced error, a weight Coeff0 smaller than 1 is used when calculating the target's distance to the two lower corners, and a weight Coeff1 greater than 1 when calculating its distance to the two upper corners, with the complementary relation Coeff0 + Coeff1 = 2. The same principle applies to the rear camera.
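The weighted distances for one camera can be computed with a small helper like the one below; the values 0.8 and 1.2 used in the test are only an example pair satisfying Coeff0 + Coeff1 = 2 (the patent does not prescribe concrete weight values).

```python
import math

def weighted_distances(target, corners, w_lower, w_upper):
    """Df'_k = Coeff_n * Df_k: corners 0 and 1 (the two lower, nearer
    corners) take w_lower < 1, corners 2 and 3 (the two upper, farther
    corners) take w_upper > 1; the pair must satisfy w_lower + w_upper = 2."""
    assert abs(w_lower + w_upper - 2.0) < 1e-9
    weights = [w_lower, w_lower, w_upper, w_upper]
    return [w * math.hypot(cx - target[0], cy - target[1])
            for w, (cx, cy) in zip(weights, corners)]
```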
The threshold Tresh of the present invention is obtained from actual tests and experiments. The definition of the threshold Tresh rests on two aspects: (1) when the cameras are installed at different positions, the Euclidean distance DT between the distance vectors, in the front and rear cameras, of a target at the same position; (2) when the cameras are installed at the same position, the Euclidean distance DT between the distance vectors, in the front and rear cameras, of targets at different positions. Taking both cases together, the maximum DT value, with a suitable margin left, is taken as the threshold Tresh.
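One hypothetical way to realize this calibration is to place the threshold between the two populations of DT values the text describes; the interpolation ratio below is an assumption, since the patent only says that a suitable margin is left.

```python
def choose_tresh(dt_same, dt_diff, margin_ratio=0.5):
    """Pick Tresh between the largest DT observed for genuinely
    same-position targets (dt_same) and the smallest DT observed for
    different-position targets (dt_diff), leaving a margin."""
    lo = max(dt_same)   # worst case among matching targets
    hi = min(dt_diff)   # best case among non-matching targets
    return lo + margin_ratio * (hi - lo)
```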
Compared with the prior art, the present invention has the following advantages and effects: (1) no need to know the internal parameters of the cameras; (2) no need to measure the relative positions of the two cameras; (3) few restrictions on the camera installation positions; (4) a simple model with low algorithm complexity and little computation; (5) good algorithm robustness and high reliability, adapting to various environments.
Brief description of the drawings
Fig. 1 is the algorithm flow chart of the present invention.
Fig. 2 is a schematic diagram of the positional relationship between the front and rear cameras of the present invention.
Fig. 3 is a schematic diagram of the shooting angle of the front camera of the present invention.
Fig. 4 is a schematic diagram of the shooting angle of the rear camera of the present invention.
Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and through embodiments; the following embodiments explain the invention, and the invention is not limited to them.
Referring to Figs. 1-4, the present invention installs one camera on each of the front and rear walls of the room. The two cameras face opposite directions, most of the indoor area is imaged in both cameras, and both the front and rear cameras can capture the position target. Both cameras have built-in intelligent image algorithms with the function of automatically detecting the position target and obtaining its coordinates.
Embodiment 1:
This embodiment does not introduce view-angle weight coefficients; coordinates are two-dimensional. The embodiment comprises the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the bottom-right, bottom-left, top-right, and top-left corners of the overlap region of the two cameras' fields of view.
In the front camera, the coordinate of the bottom-right corner point of the overlap region is denoted Pf(x0, y0), the bottom-left corner point Pf(x1, y1), the top-left corner point Pf(x2, y2), and the top-right corner point Pf(x3, y3); the coordinate of the center of the position target detected in this camera is denoted Tf(x, y). All the above coordinates take the front camera's shooting angle as the reference.
In the rear camera, the coordinate of the top-left corner point of the overlap region is denoted Pb(x0, y0), the top-right corner point Pb(x1, y1), the bottom-right corner point Pb(x2, y2), and the bottom-left corner point Pb(x3, y3); the coordinate of the center of the position target detected in this camera is denoted Tb(x, y). All the above coordinates take the rear camera's shooting angle as the reference.
Step 2: calculate the Euclidean distances from the position target detected in each of the front and rear cameras to the four reference points:
(1) in the front camera, calculate the Euclidean distance Df_k from the position target to each of the four corner points:
Df_k = sqrt((x_k - x)^2 + (y_k - y)^2), where k = 0, 1, 2, 3;
Df0 is the Euclidean distance from the position target to the bottom-right corner point; Df1 to the bottom-left corner point; Df2 to the top-left corner point; Df3 to the top-right corner point.
(2) in the rear camera, calculate the Euclidean distance Db_k from the position target to each of the four corner points:
Db_k = sqrt((x_k - x)^2 + (y_k - y)^2), where k = 0, 1, 2, 3;
Db0 is the Euclidean distance from the position target to the top-left corner point; Db1 to the top-right corner point; Db2 to the bottom-right corner point; Db3 to the bottom-left corner point.
Step 3: normalize the four Euclidean distance values of the position target in each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnf_k = Df_k / (Df0 + Df1 + Df2 + Df3), where k = 0, 1, 2, 3;
Dnf0 is the normalized Euclidean distance from the position target to the bottom-right corner point; Dnf1 to the bottom-left corner point; Dnf2 to the top-left corner point; Dnf3 to the top-right corner point.
(2) in the rear camera, the normalized distance Dnb_k = Db_k / (Db0 + Db1 + Db2 + Db3), where k = 0, 1, 2, 3;
Dnb0 is the normalized Euclidean distance from the position target to the top-left corner point; Dnb1 to the top-right corner point; Dnb2 to the bottom-right corner point; Dnb3 to the bottom-left corner point.
Step 4: form the distance vectors of the position target in the front and rear cameras:
the distance vector of the front camera is [Dnf0, Dnf1, Dnf2, Dnf3];
the distance vector of the rear camera is [Dnb0, Dnb1, Dnb2, Dnb3].
Step 5: calculate the Euclidean distance DT between the distance vectors of the position target in the front and rear cameras:
DT = sqrt((Dnf0 - Dnb0)^2 + (Dnf1 - Dnb1)^2 + (Dnf2 - Dnb2)^2 + (Dnf3 - Dnb3)^2).
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same position target:
if DT ≤ Tresh, the targets detected by the front and rear cameras are the same position target;
if DT > Tresh, they are not.
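A minimal end-to-end sketch of Embodiment 1 follows; the corner pixel coordinates and the threshold value are invented for illustration only, and index k refers to the same physical corner in both views per the patent's numbering (front: bottom-right, bottom-left, top-left, top-right; rear: top-left, top-right, bottom-right, bottom-left).

```python
import math

# Assumed pixel coordinates of the overlap-region corner points.
PF = [(600, 440), (40, 440), (40, 40), (600, 40)]   # front camera view
PB = [(40, 40), (600, 40), (600, 440), (40, 440)]   # rear camera view
TRESH = 0.08  # illustrative judgment threshold

def dt(tf, tb):
    """Steps 2-5: build both normalized distance vectors and return DT."""
    def vec(t, corners):
        d = [math.hypot(x - t[0], y - t[1]) for x, y in corners]
        s = sum(d)
        return [dk / s for dk in d]
    vf, vb = vec(tf, PF), vec(tb, PB)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vf, vb)))

def same_target(tf, tb):
    """Step 6: DT <= TRESH means both cameras see the same position target."""
    return dt(tf, tb) <= TRESH
```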
Embodiment 2:
This embodiment introduces view-angle weight coefficients; coordinates are two-dimensional. The embodiment comprises the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the bottom-right, bottom-left, top-right, and top-left corners of the overlap region of the two cameras' fields of view.
In the front camera, the coordinate of the bottom-right corner point of the overlap region is denoted Pf(x0, y0), the bottom-left corner point Pf(x1, y1), the top-left corner point Pf(x2, y2), and the top-right corner point Pf(x3, y3); the coordinate of the center of the position target detected in this camera is denoted Tf(x, y). All the above coordinates take the front camera's shooting angle as the reference.
In the rear camera, the coordinate of the top-left corner point of the overlap region is denoted Pb(x0, y0), the top-right corner point Pb(x1, y1), the bottom-right corner point Pb(x2, y2), and the bottom-left corner point Pb(x3, y3); the coordinate of the center of the position target detected in this camera is denoted Tb(x, y). All the above coordinates take the rear camera's shooting angle as the reference.
Step 2: calculate the Euclidean distances, after introducing the view-angle weight coefficients, from the position target detected in each of the front and rear cameras to the four reference points:
(1) in the front camera, calculate the weighted Euclidean distance Df'_k from the position target to each of the four corner points:
Df'_k = Coeff_n × sqrt((x_k - x)^2 + (y_k - y)^2), where n = 0, 1 and k = 0, 1, 2, 3;
Df'0 is the weighted Euclidean distance from the position target to the bottom-right corner point; Df'1 to the bottom-left corner point; Df'2 to the top-left corner point; Df'3 to the top-right corner point.
Here Coeff0 and Coeff1 are the view-angle weight coefficients: weight Coeff0 is used when calculating Df'0 and Df'1, and weight Coeff1 when calculating Df'2 and Df'3, satisfying the relation Coeff0 + Coeff1 = 2, where Coeff0 < 1 and Coeff1 > 1.
(2) in the rear camera, calculate the weighted Euclidean distance Db'_k from the position target to each of the four corner points:
Db'_k = Coefb_n × sqrt((x_k - x)^2 + (y_k - y)^2), where n = 0, 1 and k = 0, 1, 2, 3;
Db'0 is the weighted Euclidean distance from the position target to the top-left corner point; Db'1 to the top-right corner point; Db'2 to the bottom-right corner point; Db'3 to the bottom-left corner point.
Here Coefb0 and Coefb1 are the view-angle weight coefficients: weight Coefb0 is used when calculating Db'0 and Db'1, and weight Coefb1 when calculating Db'2 and Db'3, satisfying the relation Coefb0 + Coefb1 = 2, where Coefb0 > 1 and Coefb1 < 1.
Step 3: normalize the four weighted Euclidean distance values of the position target in each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnf'_k = Df'_k / (Df'0 + Df'1 + Df'2 + Df'3), where k = 0, 1, 2, 3;
Dnf'0 is the normalized weighted Euclidean distance from the position target to the bottom-right corner point; Dnf'1 to the bottom-left corner point; Dnf'2 to the top-left corner point; Dnf'3 to the top-right corner point.
(2) in the rear camera, the normalized distance Dnb'_k = Db'_k / (Db'0 + Db'1 + Db'2 + Db'3), where k = 0, 1, 2, 3;
Dnb'0 is the normalized weighted Euclidean distance from the position target to the top-left corner point; Dnb'1 to the top-right corner point; Dnb'2 to the bottom-right corner point; Dnb'3 to the bottom-left corner point.
Step 4: form the distance vectors of the front and rear cameras:
the distance vector of the front camera is [Dnf'0, Dnf'1, Dnf'2, Dnf'3];
the distance vector of the rear camera is [Dnb'0, Dnb'1, Dnb'2, Dnb'3].
Step 5: calculate the Euclidean distance DT' between the distance vectors of the position target in the front and rear cameras:
DT' = sqrt((Dnf'0 - Dnb'0)^2 + (Dnf'1 - Dnb'1)^2 + (Dnf'2 - Dnb'2)^2 + (Dnf'3 - Dnb'3)^2).
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same position target:
if DT' ≤ Tresh, the targets detected by the front and rear cameras are the same position target;
if DT' > Tresh, they are not.
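The weighted variant of Embodiment 2 can be sketched as follows; the weight pairs (0.8, 1.2) and (1.2, 0.8) are assumptions that merely satisfy the required sum of 2, with the nearer corners down-weighted and the farther corners up-weighted in each view.

```python
import math

def weighted_dt(tf, corners_f, tb, corners_b,
                coeff=(0.8, 1.2), coefb=(1.2, 0.8)):
    """DT' of Embodiment 2: weights are applied before normalization.
    In each corner list, indices 0-1 take weight[0] and indices 2-3 take
    weight[1], matching the patent's grouping of corner points."""
    def vec(t, corners, w):
        weights = [w[0], w[0], w[1], w[1]]
        d = [wk * math.hypot(x - t[0], y - t[1])
             for wk, (x, y) in zip(weights, corners)]
        s = sum(d)
        return [dk / s for dk in d]
    vf = vec(tf, corners_f, coeff)
    vb = vec(tb, corners_b, coefb)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vf, vb)))
```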
In summary, the image algorithm for mutual verification of target positioning in two cameras proposed by this patent requires neither the internal parameters of the cameras nor their relative positions and dimensions; it only needs the four corner points of the field-of-view overlap region to be calibrated. From the distances between the target position detected in each camera and the four corner points, and then from the distance between the resulting distance vectors, it can be judged whether the targets detected by the front and rear cameras belong to the same position. The algorithm is simple, convenient to implement, and gives reliable results.
In addition, it should be noted that, for the specific embodiments described in this specification, the shapes and names of the parts and components may differ; the above content of this specification is only an illustration of the structure of the present invention.
Claims (4)
1. A double-camera target positioning mutual verification nonparametric method, wherein one camera is installed at each of the front and rear of a room and both cameras can capture the position target, characterized by comprising the following steps:
Step 1: in the front and rear cameras respectively, set four reference points at the bottom-right, bottom-left, top-right, and top-left corners of the overlap region of the two cameras' fields of view;
in the front camera, the coordinate of the bottom-right corner point of the overlap region is denoted Pf0, the bottom-left corner point Pf1, the top-left corner point Pf2, and the top-right corner point Pf3; the coordinate of the center of the position target detected in this camera is denoted Tf;
in the rear camera, the coordinate of the top-left corner point of the overlap region is denoted Pb0, the top-right corner point Pb1, the bottom-right corner point Pb2, and the bottom-left corner point Pb3; the coordinate of the center of the position target detected in this camera is denoted Tb;
Step 2: calculate the Euclidean distances from the position target detected in each of the front and rear cameras to the four reference points:
(1) the Euclidean distances from the front camera's position target to the four reference points are Df0, Df1, Df2, and Df3;
(2) the Euclidean distances from the rear camera's position target to the four reference points are Db0, Db1, Db2, and Db3;
Step 3: normalize the four Euclidean distance values of the position target in each camera into the range [0,1]:
(1) in the front camera, the normalized distance Dnf_k = Df_k / (Df0 + Df1 + Df2 + Df3), where k = 0, 1, 2, 3;
(2) in the rear camera, the normalized distance Dnb_k = Db_k / (Db0 + Db1 + Db2 + Db3), where k = 0, 1, 2, 3;
Step 4: form the distance vectors of the position target in the front and rear cameras:
(1) the distance vector in the front camera is [Dnf0, Dnf1, Dnf2, Dnf3];
(2) the distance vector in the rear camera is [Dnb0, Dnb1, Dnb2, Dnb3];
Step 5: calculate the Euclidean distance DT between the distance vectors of the position target in the front and rear cameras:
DT = sqrt((Dnf0 - Dnb0)^2 + (Dnf1 - Dnb1)^2 + (Dnf2 - Dnb2)^2 + (Dnf3 - Dnb3)^2);
Step 6: given a judgment threshold Tresh, judge whether the targets detected by the front and rear cameras are the same position target:
if DT ≤ Tresh, the targets detected by the front and rear cameras are the same position target;
if DT > Tresh, they are not.
2. The double-camera target positioning mutual verification nonparametric method according to claim 1, characterized in that view-angle weight coefficients are introduced when calculating the Euclidean distances from the target detected in the front and rear cameras to the four reference points: the view-angle weight coefficients of the front camera are Coeff0 and Coeff1, and those of the rear camera are Coefb0 and Coefb1;
in the front camera, the Euclidean distance from the position target to the four corner points after introducing the view-angle weight coefficients is Df'_k = Coeff_n × Df_k, where n = 0, 1 and k = 0, 1, 2, 3;
in the rear camera, the Euclidean distance from the position target to the four corner points after introducing the view-angle weight coefficients is Db'_k = Coefb_n × Db_k, where n = 0, 1 and k = 0, 1, 2, 3.
3. The double-camera target positioning mutual verification nonparametric method according to claim 1 or 2, characterized in that:
in the front camera, weight Coeff0 is used when calculating Df'0 for the bottom-right corner point and Df'1 for the bottom-left corner point, and weight Coeff1 when calculating Df'2 for the top-left corner point and Df'3 for the top-right corner point, satisfying the relation Coeff0 + Coeff1 = 2, where Coeff0 < 1 and Coeff1 > 1;
in the rear camera, weight Coefb0 is used when calculating Db'0 for the top-left corner point and Db'1 for the top-right corner point, and weight Coefb1 when calculating Db'2 for the bottom-right corner point and Db'3 for the bottom-left corner point, satisfying the relation Coefb0 + Coefb1 = 2, where Coefb0 > 1 and Coefb1 < 1.
4. The double-camera target positioning mutual verification nonparametric method according to claim 1, characterized in that the definition of the threshold Tresh rests on two aspects: (1) when the cameras are installed at different positions, the Euclidean distance DT between the distance vectors, in the front and rear cameras, of a target at the same position; (2) when the cameras are installed at the same position, the Euclidean distance DT between the distance vectors, in the front and rear cameras, of targets at different positions; taking both cases together, the maximum DT value, with a suitable margin left according to actual conditions, is taken as the threshold Tresh.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510453771.0A CN105069784B (en) | 2015-07-29 | 2015-07-29 | Double-camera target positioning mutual verification nonparametric method
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510453771.0A CN105069784B (en) | 2015-07-29 | 2015-07-29 | Double-camera target positioning mutual verification nonparametric method
Publications (2)
Publication Number | Publication Date |
---|---|
CN105069784A true CN105069784A (en) | 2015-11-18 |
CN105069784B CN105069784B (en) | 2018-01-05 |
Family
ID=54499141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510453771.0A Active CN105069784B (en) | Double-camera target positioning mutual verification nonparametric method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105069784B (en) |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4069932B2 (en) * | 2005-05-23 | 2008-04-02 | Omron Corporation | Human detection device and human detection method |
CN101299270A (en) * | 2008-05-27 | 2008-11-05 | Southeast University | Fast synchronous calibration method for multiple cameras in a three-dimensional scanning system |
CN101566465A (en) * | 2009-05-18 | 2009-10-28 | Xi'an Jiaotong University | Method for measuring object deformation in real time |
CN101931793A (en) * | 2009-06-18 | 2010-12-29 | Honeywell International Inc. | System and method for addressing observation-limited video surveillance fields |
US20120069192A1 (en) * | 2009-10-20 | 2012-03-22 | Qing-Hu Li | Data Processing System and Method |
CN101969548A (en) * | 2010-10-15 | 2011-02-09 | National University of Defense Technology | Active video acquisition method and device based on binocular cameras |
CN102831385A (en) * | 2011-06-13 | 2012-12-19 | Sony Corporation | Device and method for target identification in a multi-camera surveillance network |
CN103777643A (en) * | 2012-10-23 | 2014-05-07 | Beijing Wangdong Network Technology Co., Ltd. | Automatic camera tracking system and tracking method based on image positioning |
CN104021538A (en) * | 2013-02-28 | 2014-09-03 | Ricoh Company, Ltd. | Object positioning method and device |
CN103473758A (en) * | 2013-05-13 | 2013-12-25 | Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences | Secondary calibration method for a binocular stereo vision system |
CN103557834A (en) * | 2013-11-20 | 2014-02-05 | Wuxi Ruan Technology Co., Ltd. | Dual-camera-based stereo positioning method |
CN104217439A (en) * | 2014-09-26 | 2014-12-17 | Nanjing Institute of Technology | Indoor visual positioning system and method |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780606A (en) * | 2016-12-31 | 2017-05-31 | Shenzhen Virtual Reality Technology Co., Ltd. | Four-camera positioning device and method |
CN109754426A (en) * | 2017-11-01 | 2019-05-14 | ArcSoft Corporation Limited | Verification method and apparatus |
CN109754426B (en) * | 2017-11-01 | 2021-04-23 | ArcSoft Corporation Limited | Method, system and device for verifying camera calibration parameters |
CN111486820A (en) * | 2019-01-25 | 2020-08-04 | Fukuoka Institute of Technology | Measurement system, measurement method, and storage medium |
CN111486820B (en) * | 2019-01-25 | 2022-05-31 | Fukuoka Institute of Technology | Measurement system, measurement method, and storage medium |
CN111104867A (en) * | 2019-11-25 | 2020-05-05 | Beijing Megvii Technology Co., Ltd. | Part-segmentation-based recognition model training and vehicle re-identification method and device |
CN111104867B (en) * | 2019-11-25 | 2023-08-25 | Beijing Megvii Technology Co., Ltd. | Part-segmentation-based recognition model training and vehicle re-identification method and device |
CN111982061A (en) * | 2020-07-15 | 2020-11-24 | Hangzhou Chen'an Technology Co., Ltd. | Distance measurement method based on binocular fixed-focus cameras with different focal lengths |
CN111982061B (en) * | 2020-07-15 | 2022-08-23 | Hangzhou Chen'an Technology Co., Ltd. | Distance measurement method based on binocular fixed-focus cameras with different focal lengths |
CN115862144A (en) * | 2022-12-23 | 2023-03-28 | Hangzhou Chen'an Technology Co., Ltd. | Camera gesture recognition method |
CN115862144B (en) * | 2022-12-23 | 2023-06-23 | Hangzhou Chen'an Technology Co., Ltd. | Gesture recognition method for a camera |
Also Published As
Publication number | Publication date |
---|---|
CN105069784B (en) | 2018-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105069784A (en) | Double-camera target positioning mutual authentication nonparametric method | |
CN105072414B (en) | Target detection and tracking method and system | |
AU2011202555B2 (en) | Multi-view alignment based on fixed-scale ground plane rectification | |
CN107121125B (en) | Automatic detection device and method for communication base station antenna pose | |
US11558547B2 (en) | Automated guide for image capturing for 3D model creation | |
CN110287519A (en) | BIM-integrated building construction progress monitoring method and system | |
CN102622767B (en) | Uncalibrated binocular spatial positioning method | |
US20210152802A1 (en) | Apparatus and method for generating a representation of a scene | |
CN103700140B (en) | Spatial modeling method for linking a single box camera with multiple dome cameras | |
CN107909615A (en) | Fire monitor positioning method based on binocular vision | |
CN106033614B (en) | Moving object detection method for a mobile camera under strong parallax | |
CN103077524A (en) | Calibration method for a hybrid vision system | |
CN105741261B (en) | Planar multi-target positioning method based on four cameras | |
CN107421473A (en) | Image-processing-based coaxiality detection method for two laser beams | |
CN105243637A (en) | Panoramic image stitching method based on three-dimensional laser point clouds | |
CN104463899A (en) | Target object detection and monitoring method and device | |
CN107038714B (en) | Multi-type visual sensing cooperative target tracking method | |
US20240104714A1 (en) | Construction inspection method, apparatus, and system | |
Shen et al. | Extrinsic calibration for wide-baseline RGB-D camera network | |
CN103971479A (en) | Forest fire positioning method based on camera calibration | |
CN103884332B (en) | Obstacle determination method and device, and mobile electronic device | |
US8837813B2 (en) | Mobile three dimensional imaging system | |
CN208754392U (en) | Three-lens camera and education recording and broadcasting device equipped with the same | |
CN105894505A (en) | Fast pedestrian positioning method based on multi-camera geometric constraints | |
TWI760128B (en) | Method and system for generating depth image and positioning system using the method |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | Address after: Floor 4, Building 4, No. 16 West Road, West Lake Science and Technology Park, Xihu District, Hangzhou, Zhejiang, 310030. Applicant after: Hangzhou Chen'an Technology Co., Ltd. Address before: Floor 4, Building 4, No. 16 West Road, West Lake Science and Technology Park, Xihu District, Hangzhou, Zhejiang, 310030. Applicant before: Hangzhou Chingan Vision Digital Technology Co., Ltd. |
GR01 | Patent grant | |