CN111157007A - Indoor positioning method using cross vision - Google Patents

Indoor positioning method using cross vision

Info

Publication number
CN111157007A
Authority
CN
China
Prior art keywords: positioning, cameras, target, targets, indoor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010044955.2A
Other languages
Chinese (zh)
Inventor
杨嘉盛
廖镜森
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shouhang Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shouhang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shouhang Intelligent Technology Co Ltd filed Critical Shenzhen Shouhang Intelligent Technology Co Ltd
Priority to CN202010044955.2A
Publication of CN111157007A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention relates to the technical field of indoor object positioning, and in particular to an indoor positioning method using cross vision. The method takes the cross-shaped overlapping area formed by two cameras as the positioning range, uses a deep learning algorithm for target detection and key-point localization, then uses a machine learning method to match the targets seen by the two cameras, and finally completes multi-target positioning in the visual area from the target matching result and the distance between the two cameras. The steps are: S1, deploying the system; S2, measuring coordinates; S3, performing Yolo-v3 target detection, first matching the targets and then performing Landmark key-point localization if there are multiple targets, or performing Landmark localization directly if there is only one; S4, calculating the distance between the target object and each camera; and S5, completing the positioning of the target object by calculation.

Description

Indoor positioning method using cross vision
Technical Field
The invention relates to the technical field of indoor object positioning, and in particular to an indoor positioning method using cross vision.
Background
As data services and multimedia services grow, the demand for positioning and navigation increases, especially in complex indoor environments such as airport halls, exhibition halls, supermarkets, libraries and underground parking lots. In such places, pedestrian traffic and the blocking effect of walls, stone columns and other large objects cause sudden signal attenuation or interference, so conventional positioning systems cannot accurately locate an object.
Commonly used indoor wireless positioning technologies include indoor GPS positioning, infrared indoor positioning, ultrasonic positioning, Bluetooth and Wi-Fi, which have the following defects:
1. Indoor GPS positioning uses a GPS receiver to locate the target, but the signal is greatly attenuated by buildings, the positioning accuracy is low, and the cost of the locator terminal is high.
2. Infrared indoor positioning uses infrared (IR) markers that emit modulated infrared light, which is received by optical sensors installed indoors. However, infrared cannot penetrate obstacles and therefore propagates only along the line of sight; the line-of-sight transmission distance is short and the signal is easily disturbed by fluorescent lamps or other indoor light, so the technique cannot cope with the complicated pedestrian traffic of an airport hall.
3. Ultrasonic positioning mainly uses reflective ranging: an ultrasonic wave is transmitted, the echo produced by the measured object is received, and the distance is calculated from the time difference between the echo and the transmitted wave. However, the method likewise cannot cope with the complicated pedestrian traffic of an airport hall, and it requires a large investment in underlying hardware, so the cost is too high.
4. Bluetooth, a short-range low-power wireless transmission technology, performs positioning by measuring signal strength. It is mainly applied to small-range positioning, and an airport hall may contain interference from many Bluetooth signals, so it is not suitable for such a complex environment.
5. Wi-Fi performs positioning, monitoring and tracking in complex environments through a wireless local area network (WLAN) formed by wireless access points. However, it is easily disturbed by other signals, which degrades its precision, and the energy consumption of the locator is high.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an indoor positioning method using cross vision, which takes the cross-shaped overlapping area formed by the imaging of two cameras as the positioning range, uses a deep learning algorithm for target detection and key-point localization, then uses a machine learning method to match the targets seen by the two cameras, and finally completes multi-target positioning in the visual area from the target matching result and the distance between the two cameras.
The indoor positioning method using cross vision comprises the following steps:
s1, deploying the system;
s2, measuring coordinates;
s3, detecting a Yolo-v3 target, firstly matching the targets if the targets are multiple, then positioning the Landmark, and directly positioning the Landmark if the targets are not multiple;
s4, calculating the distance between the target object and each camera;
and S5, completing the positioning of the target object by calculation.
The indoor positioning method using cross vision is based on a positioning system, and the positioning system comprises:
two high-definition cameras, where the separation between the two cameras, their installation height and their viewing angle relative to the ground are known;
and one computer host, used for target detection, matching and positioning.
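For concreteness, the sketch below collects these known deployment quantities into a small configuration structure. It is a minimal illustrative sketch, assuming Python; none of the identifiers appear in the patent.

```python
# A minimal sketch of the positioning system's known parameters.
# All identifiers here are illustrative assumptions.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CameraConfig:
    position: Tuple[float, float, float]   # mounting position (X, Y, Z), metres
    pixel_scale: float                     # S: metres of ground per image pixel
    image_reference: Tuple[float, float]   # reference coordinate (x, y) in the image

@dataclass
class PositioningSystem:
    camera1: CameraConfig                  # first high-definition camera
    camera2: CameraConfig                  # second high-definition camera
    # Separation, installation height and ground viewing angle follow from
    # the two camera positions and orientations fixed at deployment time.
```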
Compared with the prior art, the invention has the beneficial effects that:
1) indoor positioning is carried out with a visual method, so the equipment is simple and the cost is low;
2) the cameras are mounted at high positions and only two are needed; installing and deploying them requires no change to the indoor environment and no extra equipment on the targets to be positioned, so deployment is flexible and convenient and adapts to many environments;
3) targets in the visual region are detected with a deep learning method, which effectively removes interference from indoor pedestrians, seats, machines and the like, giving strong robustness in complex environments;
4) the target detection and matching method allows several targets to be positioned at the same time, realizing a multi-target positioning function of strong practicability;
5) the indoor positioning method performs multi-target indoor positioning in the visual cross area of the two cameras and uses deep learning to detect, localize and match targets, so it can position many targets of various kinds simultaneously, with small positioning error and high precision.
Drawings
FIG. 1 is an indoor positioning deployment scenario of the present invention;
FIG. 2 is a schematic diagram of imaging of two high definition cameras;
FIG. 3 is a flow chart of the indoor positioning of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below in connection with the accompanying drawings and examples. The examples are intended to illustrate the invention, not to limit its scope.
An airport hall is taken as the case scene and the baggage carts in the hall as the indoor positioning targets. Baggage carts are a resource of the airport: if their position information can be provided accurately, passengers save boarding and departing time, and airport departments can manage and use the cart resources reasonably and effectively. At the same time, an airport hall, as an indoor environment, features heavy pedestrian flow, a complex layout and strong wireless-signal interference. Taking the baggage cart in an airport hall as the research object of the indoor positioning method therefore both meets the requirement of a complex indoor environment and addresses a pain point in the industry, facilitating passenger travel and improving resource utilization.
S1, the deployment scheme of the indoor positioning of the invention is shown in FIG. 1. The positions $(X_1, Y_1, Z_1)$ and $(X_2, Y_2, Z_2)$ of the two cameras are known and the coordinates of the baggage cart are $(X, Y, Z)$, giving the following equations, where $R_1$ and $R_2$ are the distances from the cart to the two cameras:

$$R_1^2 = (X - X_1)^2 + (Y - Y_1)^2 + (Z - Z_1)^2$$
$$R_2^2 = (X - X_2)^2 + (Y - Y_2)^2 + (Z - Z_2)^2 \qquad (1)$$
Taking the airport hall floor as the horizontal plane and the ground as the reference for the height coordinate Z, the height of the baggage cart is always 0, so the three-dimensional relation can be simplified to a two-dimensional one in which $R_1$ and $R_2$ are treated as the horizontal distances from the cart to the two cameras:

$$R_1^2 = (X - X_1)^2 + (Y - Y_1)^2$$
$$R_2^2 = (X - X_2)^2 + (Y - Y_2)^2 \qquad (2)$$
the coordinates (X, Y) of the baggage cart can then be obtained from these two equations in two unknowns.
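Geometrically, solving the two equations in (2) means intersecting two circles centred on the cameras' ground projections. The sketch below illustrates this, assuming Python; the function and variable names are not from the patent.

```python
# A minimal sketch of solving equations (2): the cart's ground coordinates
# are an intersection point of two circles centred on the cameras'
# ground projections. Identifier names are illustrative assumptions.
import math
from typing import List, Tuple

def intersect_circles(c1: Tuple[float, float], r1: float,
                      c2: Tuple[float, float], r2: float) -> List[Tuple[float, float]]:
    """Return the 0, 1 or 2 points satisfying both distance equations."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                   # no real solution (bad measurement)
    a = (r1**2 - r2**2 + d**2) / (2 * d)            # distance from c1 to the chord line
    h = math.sqrt(max(r1**2 - a**2, 0.0))           # half-length of the chord
    px, py = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    # Offsetting along the perpendicular gives the sign-ambiguous pair.
    ox, oy = h * (y2 - y1) / d, -h * (x2 - x1) / d
    return [(px + ox, py + oy), (px - ox, py - oy)]
```

The two returned points correspond to the (±X, ±Y) ambiguity noted in step S2 below, where it is resolved from the image coordinates.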
s2, taking the image as a two-dimensional plane, and when the two cameras have the same parameters and the coordinate positions are known, as shown in fig. 2, the coordinates of the baggage cart in the two pictures are (x ', y') and (x ", y"), respectively, then the pixel distance r of the baggage cart can be solved1And r2Knowing the coordinate positions of the two cameras, the actual distance corresponding to each pixel point of the two cameras can be solved as S1And S2Then equation (2) can be expressed as:
Figure BDA0002368915160000044
Figure BDA0002368915160000045
from these the candidate coordinates $(\pm X, \pm Y)$ of the baggage cart can be solved, and the correct coordinates $(X, Y)$ are finally determined from the image coordinates $(x', y')$ and $(x'', y'')$;
s3, based on the positioning principle of the luggage cart, detecting the position of the luggage cart in the image by using a Yolo-v3 target detection model, and then determining the coordinates (x ') of the two rear wheels of the luggage cart by using a Landmark key point positioning model'1,y′1) And (x'2,y′2) The middle point of the two wheels is the coordinate (x ', y') of the baggage trolley on the camera 1, and the coordinate (x ', y') of the baggage trolley on the camera 2 is positioned in the same way;
s4, the image reference coordinate of the camera 1 is known as (x)1,y1) The actual distance corresponding to each pixel point is S1(ii) a Camera 2 image reference coordinate is (x)2,y2) The actual distance corresponding to each pixel point is S2; then according to the equation:
Figure BDA0002368915160000046
s5, solving for R1And R2Substituting the coordinate world coordinates (X, Y) of the trolley into the formula (2) to obtain the coordinate world coordinates (X, Y) of the trolley, and completing indoor positioning of the single luggage trolley;
When several baggage carts appear in the detection area, the carts seen by the two cameras must be matched in pairs. The matching method is as follows: each detected object in the two cameras is resized to 100 x 100; a surf transform is applied to each of the R, G and B channels of the resized image; features are then extracted from each transformed channel with a kernel of size 5 x 5 and a stride of 5, so each object yields 1200 image features; together with the object's pixel coordinates in the image and its distance, each object is described by 1203 features in total. After feature extraction, object ① in camera 1 is combined in turn with every object in camera 2, the similarity of each combination is compared with a random forest (RF) algorithm, and the pair with the highest similarity is taken to be the same object; the remaining objects are matched by analogy, completing the matching of identical objects across the two cameras. Finally, steps S3 and S4 are carried out for each matched cart, which completes the indoor positioning of multiple baggage carts.
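The arithmetic of this matching scheme can be sketched as follows. The text does not specify the exact "surf transform" applied to each channel or how the random-forest similarity is computed, so the sketch substitutes a Laplacian response for the former and cosine similarity for the latter; both substitutions, and all identifier names, are assumptions.

```python
# A sketch of the per-object descriptor and the pairwise matching.
# The Laplacian response and cosine similarity are stand-ins (assumptions)
# for the patent's unspecified "surf transform" and RF similarity score.
import cv2
import numpy as np

def describe(crop_bgr: np.ndarray, pixel_xy, distance) -> np.ndarray:
    crop = cv2.resize(crop_bgr, (100, 100))
    feats = []
    for ch in cv2.split(crop):                      # three colour channels
        resp = cv2.Laplacian(ch.astype(np.float32), cv2.CV_32F)
        # 5x5 kernel, stride 5 over 100x100 -> 20x20 = 400 features per channel
        pooled = resp.reshape(20, 5, 20, 5).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    # 3 * 400 image features + 2 pixel coordinates + 1 distance = 1203 features
    return np.concatenate(feats + [np.array([*pixel_xy, distance])])

def best_match(desc1: np.ndarray, descs2: list) -> int:
    """Index of the camera-2 object most similar to the camera-1 object."""
    sims = [float(np.dot(desc1, d) / (np.linalg.norm(desc1) * np.linalg.norm(d)))
            for d in descs2]
    return int(np.argmax(sims))
```

The pooling arithmetic reproduces the 1203-dimensional descriptor count given in the text.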
The above description covers only preferred embodiments of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the invention, and such modifications and variations should also be regarded as falling within its protection scope.

Claims (2)

1. An indoor positioning method using cross vision, comprising the following steps:
s1, deploying the system;
s2, measuring coordinates;
s3, detecting a Yolo-v3 target, firstly matching the targets if the targets are multiple, then positioning the Landmark, and directly positioning the Landmark if the targets are not multiple;
s4, calculating the distance between the target object and each camera;
and S5, completing the positioning of the target object by calculation.
2. The indoor positioning method using cross vision according to claim 1, characterized in that it is based on a positioning system, and the positioning system comprises:
two high-definition cameras, where the separation between the two cameras, their installation height and their viewing angle relative to the ground are known;
and one computer host, used for target detection, matching and positioning.
CN202010044955.2A 2020-01-16 2020-01-16 Indoor positioning method using cross vision Pending CN111157007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010044955.2A CN111157007A (en) 2020-01-16 2020-01-16 Indoor positioning method using cross vision

Publications (1)

Publication Number Publication Date
CN111157007A (en) 2020-05-15

Family

ID=70563268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010044955.2A Pending CN111157007A (en) 2020-01-16 2020-01-16 Indoor positioning method using cross vision

Country Status (1)

Country Link
CN (1) CN111157007A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103344213A (en) * 2013-06-28 2013-10-09 三星电子(中国)研发中心 Method and device for measuring distance of double-camera
CN103557834A (en) * 2013-11-20 2014-02-05 无锡儒安科技有限公司 Dual-camera-based solid positioning method
CN103630112A (en) * 2013-12-03 2014-03-12 青岛海尔软件有限公司 Method for achieving target positioning through double cameras
CN107449432A (en) * 2016-05-31 2017-12-08 华为终端(东莞)有限公司 One kind utilizes dual camera air navigation aid, device and terminal
CN106878949A (en) * 2017-02-27 2017-06-20 努比亚技术有限公司 A kind of positioning terminal based on dual camera, system and method
CN109376603A (en) * 2018-09-25 2019-02-22 北京周同科技有限公司 A kind of video frequency identifying method, device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐德 et al. (eds.), 《机器人视觉测量与控制》 (Robot Vision Measurement and Control), National Defense Industry Press, 31 January 2016 *
杨露菁 et al. (eds.), 《智能图像处理及应用》 (Intelligent Image Processing and Applications), China Railway Publishing House, 31 March 2019 *

Similar Documents

Publication Publication Date Title
US10024965B2 (en) Generating 3-dimensional maps of a scene using passive and active measurements
CN110926474B (en) Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method
CN107265221B (en) Method and system for multiple 3D sensor calibration
CN103890606B (en) The method and system of map is created for using radar-optical imagery fusion
JP5844463B2 (en) Logo detection for indoor positioning
Acharya et al. BIM-Tracker: A model-based visual tracking approach for indoor localisation using a 3D building model
EP3617749B1 (en) Method and arrangement for sourcing of location information, generating and updating maps representing the location
Qu et al. Landmark based localization in urban environment
CN102176243A (en) Target ranging method based on visible light and infrared camera
CN101952688A (en) Method for map matching with sensor detected objects
US20220327737A1 (en) Determining position using computer vision, lidar, and trilateration
EP3447729B1 (en) 2d vehicle localizing using geoarcs
Bai et al. Stereovision based obstacle detection approach for mobile robot navigation
Li et al. Automatic parking slot detection based on around view monitor (AVM) systems
Tao et al. Automated processing of mobile mapping image sequences
Ke et al. Roadway surveillance video camera calibration using standard shipping container
Grejner-Brzezinska et al. From Mobile Mapping to Telegeoinformatics
KR20150008295A (en) User device locating method and apparatus for the same
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision
US20130329944A1 (en) Tracking aircraft in a taxi area
CN111157007A (en) Indoor positioning method using cross vision
Jiang et al. Precise vehicle ego-localization using feature matching of pavement images
WO2020244467A1 (en) Method and device for motion state estimation
Hanel et al. Metric scale calculation for visual mapping algorithms
Wei Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200515)