CN103557834A - Dual-camera-based solid positioning method - Google Patents

Dual-camera-based solid positioning method Download PDF

Info

Publication number
CN103557834A
CN103557834A (application CN201310589445.3A; granted as CN103557834B)
Authority
CN
China
Prior art keywords
camera
entity
dual
photocentre
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310589445.3A
Other languages
Chinese (zh)
Other versions
CN103557834B (en)
Inventor
张兰
毛续飞
刘云浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tellhow Intelligent Engineering Co ltd
Original Assignee
WUXI RUIAN TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUXI RUIAN TECHNOLOGY CO LTD filed Critical WUXI RUIAN TECHNOLOGY CO LTD
Priority to CN201310589445.3A priority Critical patent/CN103557834B/en
Publication of CN103557834A publication Critical patent/CN103557834A/en
Application granted granted Critical
Publication of CN103557834B publication Critical patent/CN103557834B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves

Abstract

The invention discloses a dual-camera-based entity positioning method. The object to be positioned is passively located by a pair of closely spaced ordinary cameras; it need not carry or be fitted with any device, nor actively request to be located. Compared with conventional positioning methods, the method has the following advantages: passive positioning, i.e. the object can be located without being fitted with any device; only two ordinary cameras are needed for positioning, so deployment is convenient and the cost is low; wide applicability, i.e. camera-based positioning shows none of the indoor/outdoor difference found in GPS (Global Positioning System) and other positioning modes and can be applied anywhere; and small error, i.e. compared with position information obtained from positioning methods based on wireless signals and the like, position information obtained from video images has small errors and high precision.

Description

Entity localization method based on dual cameras
Technical field
The present invention relates to localization technology, and in particular to an entity localization method based on dual cameras.
Background technology
With the spread of smart mobile devices and the development of localization technology, position information plays an increasingly important role in people's lives and supports a variety of valuable services. Examples include: localization and navigation, where a user locates the current position with a GPS navigator and, combined with map information, obtains a route to a destination; nearby-information search and navigation, where a user queries service information near the current position, such as bus stops, restaurants, banks, and filling stations, together with promotions and routes to those places; location-based social networking, which helps a user find friends or like-minded people nearby for social interaction; and location-based games, in which a user interacts with real geographic positions during play. In addition, position information is closely related to security and has important applications in security protection, such as locating and tracking intruders and searching for lost articles.
Existing localization technologies are mainly active: a user who wants to know his own position must take part in the localization. For example, with GPS (Global Positioning System) the user generally must carry a mobile phone or navigator with a GPS module that communicates with satellites; radio-frequency identification (RFID) localization requires the person or object being located to carry an RFID tag; localization based on acoustic ranging generally requires equipment that can emit and receive sound waves; and localization based on wireless signals (such as WiFi-fingerprint techniques) requires a device that can receive wireless signals, typically a smartphone. All of these technologies require the user to hold certain necessary equipment and to actively seek localization. In security applications, however, such as the safety management of public places, one needs to monitor the number of people, their positions, and the direction of crowd flow; in such situations not everyone carries a positioning device, and people may not even wish to be located, so active localization technology is unsuitable.
Summary of the invention
The object of the present invention is to solve the problems described in the background section by means of an entity localization method based on dual cameras.
To achieve this object, the present invention adopts the following technical solution:
An entity localization method based on dual cameras, comprising the following steps:
A. Calibration: deploy a first camera and a second camera with identical parameters on the same horizontal line, keep the distance between the optical centers of the first and second cameras fixed at d_a, and calibrate both cameras with a calibration image;
B. Entity detection: in the first camera and the second camera respectively, detect and recognize the entities to be located with image recognition techniques, and determine the pixel set of each recognized entity;
C. Entity matching: match the pixel sets of the entities to be located in the first and second cameras, and compute the correspondence between entities in the two cameras;
D. Entity localization: for each entity in turn, use its visual disparity between the first and second cameras and the principle of stereo triangulation to compute the entity's position in space.
In particular, calibrating the first and second cameras with a calibration image in step A specifically comprises: calibrating the two cameras with the calibration image to obtain the calibration parameters of each camera; these parameters define the pixel mapping from the distorted image to the corrected image.
In particular, computing the correspondence between entities in the first and second cameras in step C specifically comprises: for the pixel set of each entity, applying standard feature-point detection and feature-vector extraction methods from image processing to obtain the feature points and feature vectors of each entity; by computing the similarity of entity feature vectors between the first and second camera images, matching the same entity across the two images and obtaining the matched feature points of each entity.
In particular, computing the entity's position in space by stereo triangulation in step D specifically comprises: take the optical center of the first camera as the coordinate origin O(0, 0, 0); the direction from the optical center of the first camera to that of the second camera is the positive X axis, the direction through the optical center perpendicular to the imaging plane is the positive Z axis, and the direction perpendicular to the XZ plane, pointing vertically downward, is the positive Y axis; the optical center of the second camera is then at (d_a, 0, 0). The intersection of each camera's optical axis with its imaging plane is its principal point; let the principal point of the first camera be O_L(o_x, o_y). For a feature point P on the entity to be located, let its true coordinates be (X, Y, Z), and let its imaging points in the two pictures be P_L(x_L, y_L) in the first camera and P_R(x_R, y_R) in the second. If the focal length of both cameras is f, the imaging of P in the first camera satisfies

λ_L · (x_L, y_L, 1)^T = K · (X, Y, Z)^T,  where K = [[f, 0, o_x], [0, f, o_y], [0, 0, 1]]

and the imaging of P in the second camera satisfies

λ_R · (x_R, y_R, 1)^T = K · (X, Y, Z)^T − K · (d_a, 0, 0)^T

where λ_L and λ_R are the scaling parameters of the first and second cameras; for cameras with identical parameters the scaling parameters are also equal. Solving these imaging relations for the unknowns (X, Y, Z) gives the true coordinates of point P in space, and determining the spatial positions of all feature points of the entity yields the spatial position of the entity.
The entity localization method based on dual cameras provided by the invention uses a pair of closely spaced ordinary cameras to passively locate the target: the entity being located need not carry or be fitted with any equipment, nor actively request localization. Compared with traditional localization methods, the invention has the following advantages. First, passive localization: an entity can be located without being fitted with any equipment. Second, only two ordinary cameras are needed, so deployment is convenient and the cost is low. Third, wide applicability: unlike GPS and similar methods, camera-based localization shows no indoor/outdoor difference, so the invention can be applied anywhere. Fourth, small error: position information obtained from video images has smaller errors and higher precision than that obtained by localization methods based on wireless signals and the like.
Brief description of the drawings
Fig. 1 is a flow chart of the entity localization method based on dual cameras provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram, provided by the embodiment of the present invention, of computing an entity's position by stereo triangulation.
Embodiment
The invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it. Note also that, for convenience of description, the drawings show only the parts relevant to the invention rather than the full content.
Fig. 1 shows the flow chart of the entity localization method based on dual cameras provided by the embodiment of the present invention.
The entity localization method based on dual cameras in the present embodiment specifically comprises the following steps.
Step S101, calibration: deploy a first camera and a second camera with identical parameters on the same horizontal line, keep the distance between their optical centers fixed at d_a, and calibrate both cameras with a calibration image (such as a black-and-white checkerboard) to obtain the calibration parameters of each camera. These parameters define the pixel mapping from the distorted image to the corrected image, ensuring that the camera images fed to the subsequent localization steps show no significant distortion after the parameter mapping is applied.
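The patent does not prescribe a particular distortion model for this pixel mapping; a common choice is a polynomial radial model, sketched below with NumPy. The coefficients k1 and k2 and the function name are illustrative assumptions, not part of the patent; in practice the coefficients would be fitted from the checkerboard calibration image.

```python
import numpy as np

def map_pixels(pts, f, ox, oy, k1=0.0, k2=0.0):
    """Illustrative pixel mapping between distorted and corrected images
    using a polynomial radial model: a point at normalized radius r is
    rescaled by (1 + k1*r^2 + k2*r^4). With k1 = k2 = 0 the mapping is
    the identity."""
    pts = np.asarray(pts, dtype=float)
    # Normalize pixel coordinates relative to the principal point (ox, oy).
    xn = (pts[:, 0] - ox) / f
    yn = (pts[:, 1] - oy) / f
    r2 = xn**2 + yn**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    # Apply the radial rescaling and return to pixel coordinates.
    xu = xn * scale * f + ox
    yu = yn * scale * f + oy
    return np.stack([xu, yu], axis=1)
```

A camera with no distortion (k1 = k2 = 0) maps every pixel to itself, which is the "no significant distortion" condition the step aims for after correction.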
Step S102, entity detection: in the first and second cameras respectively, detect and recognize the entities to be located (such as pedestrians and vehicles) with image recognition techniques, and determine the pixel set of each recognized entity.
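How each entity's pixel set is obtained is left open by the patent ("image recognition technology"). As one minimal stand-in, the sketch below finds pixel sets by background subtraction followed by 4-connected component labeling; a real deployment would use a trained pedestrian or vehicle detector instead.

```python
import numpy as np
from collections import deque

def entity_pixel_sets(frame, background, thresh=30):
    """Return one pixel set {(row, col), ...} per detected entity,
    using background subtraction plus 4-connected component labeling.
    Illustrative only: stands in for the recognition step."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    visited = np.zeros_like(mask, dtype=bool)
    entities = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # Breadth-first search over one foreground region.
                comp, q = set(), deque([(r, c)])
                visited[r, c] = True
                while q:
                    y, x = q.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                entities.append(comp)
    return entities
```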
Step S103, entity matching: match the pixel sets of the entities to be located in the first and second cameras, and compute the correspondence between entities in the two cameras, i.e. which pixel set in the second camera corresponds to each entity's pixel set in the first camera. The detailed process is: for the pixel set of each entity, apply standard feature-point detection and feature-vector extraction methods from image processing to obtain the feature points and feature vectors of each entity; by computing the similarity of entity feature vectors between the first and second camera images, match the same entity across the two images and obtain the matched feature points of each entity.
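The similarity-based pairing in this step can be sketched as follows, assuming each entity has already been reduced to one descriptor vector (the patent leaves the feature extractor open, e.g. SIFT or ORB statistics). Cosine similarity and greedy argmax pairing are illustrative choices, not mandated by the text.

```python
import numpy as np

def match_entities(desc_left, desc_right):
    """Pair each left-camera entity with the right-camera entity whose
    descriptor has the highest cosine similarity. Inputs are arrays of
    shape (n_entities, descriptor_dim); returns {left_idx: right_idx}."""
    L = np.asarray(desc_left, dtype=float)
    R = np.asarray(desc_right, dtype=float)
    # Normalize rows so the dot product equals cosine similarity.
    L = L / np.linalg.norm(L, axis=1, keepdims=True)
    R = R / np.linalg.norm(R, axis=1, keepdims=True)
    sim = L @ R.T  # pairwise cosine-similarity matrix
    return {i: int(np.argmax(sim[i])) for i in range(len(L))}
```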
Step S104, entity localization: for each entity in turn, use its visual disparity between the first and second cameras and the principle of stereo triangulation to compute the entity's position in space. As shown in Fig. 2, 201 denotes the first camera and 202 the second camera. Take the optical center of the first camera as the coordinate origin O(0, 0, 0); the direction from the optical center of the first camera to that of the second camera is the positive X axis, the direction through the optical center perpendicular to the imaging plane is the positive Z axis, and the direction perpendicular to the XZ plane, pointing vertically downward, is the positive Y axis; the optical center of the second camera is then at (d_a, 0, 0). The intersection of each camera's optical axis with its imaging plane is its principal point; let the principal point of the first camera be O_L(o_x, o_y). For a feature point P on the entity to be located, let its true coordinates be (X, Y, Z), and let its imaging points in the two pictures be P_L(x_L, y_L) in the first camera and P_R(x_R, y_R) in the second. If the focal length of both cameras is f, the imaging of P in the first camera satisfies

λ_L · (x_L, y_L, 1)^T = K · (X, Y, Z)^T,  where K = [[f, 0, o_x], [0, f, o_y], [0, 0, 1]]

and the imaging of P in the second camera satisfies

λ_R · (x_R, y_R, 1)^T = K · (X, Y, Z)^T − K · (d_a, 0, 0)^T

where λ_L and λ_R are the scaling parameters of the first and second cameras; for cameras with identical parameters the scaling parameters are also equal. Solving these imaging relations for the unknowns (X, Y, Z) gives the true coordinates of point P in space, and determining the spatial positions of all feature points of the entity yields the spatial position of the entity.
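The two imaging relations admit a closed-form solution: expanding the third row gives λ_L = λ_R = Z, the first rows give x_L = f·X/Z + o_x and x_R = f·(X − d_a)/Z + o_x, and subtracting them yields the familiar disparity formula Z = f·d_a / (x_L − x_R), after which X and Y follow from the first camera's equations. A minimal sketch (variable names follow the text):

```python
def triangulate(xL, yL, xR, f, d_a, ox, oy):
    """Recover the true coordinates (X, Y, Z) of a feature point from
    its image positions in the two cameras, using the imaging relations
    above with lambda_L = lambda_R = Z."""
    disparity = xL - xR            # x_L - x_R = f * d_a / Z
    Z = f * d_a / disparity
    X = (xL - ox) * Z / f          # from x_L = f*X/Z + o_x
    Y = (yL - oy) * Z / f          # from y_L = f*Y/Z + o_y
    return X, Y, Z

def project(X, Y, Z, f, d_a, ox, oy):
    """Forward imaging model of both cameras, for checking the solver."""
    xL, yL = f * X / Z + ox, f * Y / Z + oy
    xR, yR = f * (X - d_a) / Z + ox, f * Y / Z + oy
    return (xL, yL), (xR, yR)
```

Projecting a known point through `project` and feeding the pixel coordinates back into `triangulate` recovers the original (X, Y, Z), which is exactly the round trip the step performs for every matched feature point.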
Compared with traditional localization methods, the technical solution of the invention has the following advantages. First, passive localization: a pair of closely spaced ordinary cameras passively locates the target, and the entity being located need not carry or be fitted with any equipment, nor actively request localization. Second, only two ordinary cameras are needed, so deployment is convenient and the cost is low. Third, wide applicability: unlike GPS and similar methods, camera-based localization shows no indoor/outdoor difference, so the invention can be applied anywhere. Fourth, small error: position information obtained from video images has smaller errors and higher precision than that obtained by localization methods based on wireless signals and the like.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments described here, and that various obvious variations, readjustments, and substitutions can be made without departing from the scope of protection. Therefore, although the invention has been described in detail through the above embodiments, it is not limited to them; without departing from the inventive concept it may include further equivalent embodiments, and its scope is determined by the appended claims.

Claims (4)

1. An entity localization method based on dual cameras, characterized in that it comprises the following steps:
A. Calibration: deploy a first camera and a second camera with identical parameters on the same horizontal line, keep the distance between the optical centers of the first and second cameras fixed at d_a, and calibrate both cameras with a calibration image;
B. Entity detection: in the first camera and the second camera respectively, detect and recognize the entities to be located with image recognition techniques, and determine the pixel set of each recognized entity;
C. Entity matching: match the pixel sets of the entities to be located in the first and second cameras, and compute the correspondence between entities in the two cameras;
D. Entity localization: for each entity in turn, use its visual disparity between the first and second cameras and the principle of stereo triangulation to compute the entity's position in space.
2. The entity localization method based on dual cameras according to claim 1, characterized in that calibrating the first and second cameras with a calibration image in step A specifically comprises:
calibrating the two cameras with the calibration image to obtain the calibration parameters of each camera, these parameters defining the pixel mapping from the distorted image to the corrected image.
3. The entity localization method based on dual cameras according to claim 1, characterized in that computing the correspondence between entities in the first and second cameras in step C specifically comprises:
for the pixel set of each entity, applying standard feature-point detection and feature-vector extraction methods from image processing to obtain the feature points and feature vectors of each entity; and, by computing the similarity of entity feature vectors between the first and second camera images, matching the same entity across the two images and obtaining the matched feature points of each entity.
4. The entity localization method based on dual cameras according to any one of claims 1 to 3, characterized in that computing the entity's position in space by stereo triangulation in step D specifically comprises:
taking the optical center of the first camera as the coordinate origin O(0, 0, 0); the direction from the optical center of the first camera to that of the second camera is the positive X axis, the direction through the optical center perpendicular to the imaging plane is the positive Z axis, and the direction perpendicular to the XZ plane, pointing vertically downward, is the positive Y axis; the optical center of the second camera is then at (d_a, 0, 0); the intersection of each camera's optical axis with its imaging plane is its principal point, and the principal point of the first camera is O_L(o_x, o_y); for a feature point P on the entity to be located, its true coordinates are (X, Y, Z), and its imaging points in the two pictures are P_L(x_L, y_L) in the first camera and P_R(x_R, y_R) in the second; if the focal length of both cameras is f, the imaging of P in the first camera satisfies

λ_L · (x_L, y_L, 1)^T = K · (X, Y, Z)^T,  where K = [[f, 0, o_x], [0, f, o_y], [0, 0, 1]]

and the imaging of P in the second camera satisfies

λ_R · (x_R, y_R, 1)^T = K · (X, Y, Z)^T − K · (d_a, 0, 0)^T

where λ_L and λ_R are the scaling parameters of the first and second cameras, which are equal for cameras with identical parameters; solving these imaging relations for the unknowns (X, Y, Z) gives the true coordinates of point P in space, and determining the spatial positions of all feature points of the entity yields the spatial position of the entity.
CN201310589445.3A 2013-11-20 2013-11-20 A kind of entity localization method based on dual camera Active CN103557834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310589445.3A CN103557834B (en) 2013-11-20 2013-11-20 A kind of entity localization method based on dual camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310589445.3A CN103557834B (en) 2013-11-20 2013-11-20 A kind of entity localization method based on dual camera

Publications (2)

Publication Number Publication Date
CN103557834A true CN103557834A (en) 2014-02-05
CN103557834B CN103557834B (en) 2016-01-20

Family

ID=50012145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310589445.3A Active CN103557834B (en) 2013-11-20 2013-11-20 A kind of entity localization method based on dual camera

Country Status (1)

Country Link
CN (1) CN103557834B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069784A (en) * 2015-07-29 2015-11-18 杭州晨安视讯数字技术有限公司 Double-camera target positioning mutual authentication nonparametric method
CN105606086A (en) * 2015-08-28 2016-05-25 宇龙计算机通信科技(深圳)有限公司 Positioning method and terminal
CN109587628A (en) * 2018-12-14 2019-04-05 深圳力维智联技术有限公司 A kind of interior real-time location method and device
CN111157007A (en) * 2020-01-16 2020-05-15 深圳市守行智能科技有限公司 Indoor positioning method using cross vision
CN111853477A (en) * 2020-07-27 2020-10-30 盐城工学院 Double-camera positioning device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102062596A (en) * 2010-11-12 2011-05-18 中兴通讯股份有限公司 Method and device for measuring distance by utilizing double cameras
CN102567714A (en) * 2011-12-14 2012-07-11 深圳市中控生物识别技术有限公司 Method for correcting color image and black-and-white image based on double-camera face identification
CN103063193A (en) * 2012-11-30 2013-04-24 青岛海信电器股份有限公司 Method and device for ranging by camera and television
CN103090875A (en) * 2012-11-26 2013-05-08 华南理工大学 Real-time real-scene matching vehicle navigation method and device based on double cameras

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102062596A (en) * 2010-11-12 2011-05-18 中兴通讯股份有限公司 Method and device for measuring distance by utilizing double cameras
CN102567714A (en) * 2011-12-14 2012-07-11 深圳市中控生物识别技术有限公司 Method for correcting color image and black-and-white image based on double-camera face identification
CN103090875A (en) * 2012-11-26 2013-05-08 华南理工大学 Real-time real-scene matching vehicle navigation method and device based on double cameras
CN103063193A (en) * 2012-11-30 2013-04-24 青岛海信电器股份有限公司 Method and device for ranging by camera and television

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱宗磊: "Dual-camera moving-target detection and tracking system based on the Windows platform", China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069784A (en) * 2015-07-29 2015-11-18 杭州晨安视讯数字技术有限公司 Double-camera target positioning mutual authentication nonparametric method
CN105069784B (en) * 2015-07-29 2018-01-05 杭州晨安科技股份有限公司 A kind of twin camera target positioning mutually checking nonparametric technique
CN105606086A (en) * 2015-08-28 2016-05-25 宇龙计算机通信科技(深圳)有限公司 Positioning method and terminal
CN109587628A (en) * 2018-12-14 2019-04-05 深圳力维智联技术有限公司 A kind of interior real-time location method and device
CN111157007A (en) * 2020-01-16 2020-05-15 深圳市守行智能科技有限公司 Indoor positioning method using cross vision
CN111853477A (en) * 2020-07-27 2020-10-30 盐城工学院 Double-camera positioning device

Also Published As

Publication number Publication date
CN103557834B (en) 2016-01-20

Similar Documents

Publication Publication Date Title
Zhu et al. Three-dimensional VLC positioning based on angle difference of arrival with arbitrary tilting angle of receiver
US7991194B2 (en) Apparatus and method for recognizing position using camera
US20200279434A1 (en) System and method for determining geo-location(s) in images
US9080882B2 (en) Visual OCR for positioning
TWI391632B (en) Position/navigation system using identification tag and position/navigation method
US9424672B2 (en) Method and apparatus for processing and aligning data point clouds
US7689001B2 (en) Method for recognizing location using built-in camera and device thereof
US8494553B2 (en) Position determination using horizontal angles
CN104936283A (en) Indoor positioning method, server and system
CN110148185A (en) Determine method, apparatus, electronic equipment and the storage medium of coordinate system conversion parameter
CN103557834B (en) A kind of entity localization method based on dual camera
CN104378735A (en) Indoor positioning method, client side and server
KR101442703B1 (en) GPS terminal and method for modifying location position
CN105387857A (en) Navigation method and device
JP6165422B2 (en) Information processing system, information processing device, server, terminal device, information processing method, and program
WO2019153855A1 (en) Object information acquisition system capable of 360-degree panoramic orientation and position sensing, and application thereof
Dabove et al. Positioning techniques with smartphone technology: Performances and methodologies in outdoor and indoor scenarios
KR20120009638A (en) Method for Managing Virtual-Object Data about Non-Recognition Reality-Object, Augmented Reality Device and Recording Medium
KR20100060472A (en) Apparatus and method for recongnizing position using camera
Jain et al. A study on Indoor navigation techniques using smartphones
CN103591953A (en) Personnel location method based on single camera
Chai et al. Multi-sensor fusion-based indoor single-track semantic map construction and localization
Jiao et al. A hybrid of smartphone camera and basestation wide-area indoor positioning method
US20200132799A1 (en) System and method for positioning a terminal device
KR101061360B1 (en) Server, terminal and method obtaining location information of target mobile terminal using direction information based on earth magnetism

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 214135 Room 501, A District, Qingyuan Road, Wuxi science and Technology Park, Wuxi New District, Jiangsu

Patentee after: RUN TECHNOLOGY CO.,LTD.

Address before: 214135 Room 501, A District, Qingyuan Road, Wuxi science and Technology Park, Wuxi New District, Jiangsu

Patentee before: WUXI RUN TECHNOLOGY CO.,LTD.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20180516

Address after: 100176 Beijing Beijing economic and Technological Development Zone Yuncheng Street 2, 1 A block 11 11 1109

Patentee after: BEIJING TAIHAO INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 214135 Room 501, A District, Qingyuan Road, Wuxi science and Technology Park, Wuxi New District, Jiangsu

Patentee before: RUN TECHNOLOGY CO.,LTD.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200927

Address after: 100176 three floor, 1 Building 3, Jinxiu street, Beijing economic and Technological Development Zone, Daxing District, Beijing.

Patentee after: BEIJING TELLHOW INTELLIGENT ENGINEERING Co.,Ltd.

Address before: 100176 Beijing Beijing economic and Technological Development Zone Yuncheng Street 2, 1 A block 11 11 1109

Patentee before: BEIJING TAIHAO INFORMATION TECHNOLOGY Co.,Ltd.