CN112504263A - Indoor navigation positioning device based on multi-view vision and positioning method thereof - Google Patents


Info

Publication number
CN112504263A
Authority
CN
China
Prior art keywords
robot
monocular
coordinate system
camera
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011329957.2A
Other languages
Chinese (zh)
Inventor
王纪武
刘伟
戴波
杨历
原雪纯
褚文杰
裴欣
韩晓
许钧翔
严晨
韩硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority claimed from CN202011329957.2A
Publication of CN112504263A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides an indoor navigation positioning device based on multi-view vision and a positioning method thereof. The device includes a robot, an L-shaped line light source, a plurality of monocular cameras, and a control system. The L-shaped line light source is arranged on the robot. The monocular cameras are located above the robot; the fields of view of adjacent monocular cameras overlap, and the combined field of view of the monocular cameras is no smaller than the walking range of the robot. The control system comprises a vision processing system and a signal transmission system. The vision processing system is communicatively connected to the monocular cameras; the signal transmission system comprises an upper computer and a lower computer, the upper computer being communicatively connected to the vision processing system, and the lower computer being arranged on the robot and communicatively connected to the upper computer and the robot. In this application, the positioning process of the device and method is neither disturbed by the external environment nor limited by the positioning area, which improves positioning accuracy and reduces positioning cost.

Description

Indoor navigation positioning device based on multi-view vision and positioning method thereof
Technical Field
The invention relates to the technical field of robot navigation and positioning, in particular to an indoor navigation and positioning device based on multi-view vision and a positioning method thereof.
Background
Positioning means determining the position of a target object, and positioning technologies can be divided into outdoor and indoor positioning according to the environment. Outdoor positioning technologies such as GPS of the United States, GLONASS of Russia, GALILEO of the European Union, and the BeiDou satellite navigation system of China are now mature enough to satisfy positioning in most outdoor environments. Indoors, however, obstacles are numerous and the environment is complex and even multi-level, so once these outdoor positioning technologies are applied to indoor scenes, satellite signal attenuation greatly reduces the positioning accuracy, and they cannot be applied indoors directly.
At present, how to obtain position information in a complex indoor scene has become a research hotspot, and a batch of solutions based on dedicated equipment, represented by infrared positioning, ultrasonic positioning, WIFI signal positioning, ultra-wideband positioning and radio frequency identification positioning, as well as solutions based on geomagnetic positioning, have emerged. However, these solutions suffer from problems such as susceptibility to interference, limited positioning areas, and high deployment cost.
Disclosure of Invention
In view of the problems in the background art, an object of the present invention is to provide an indoor navigation positioning device based on multi-view vision and a positioning method thereof, whose positioning process is neither disturbed by the external environment nor limited by the positioning area, thereby improving positioning accuracy and reducing positioning cost.
In order to achieve the above object, the present invention provides an indoor navigation and positioning device based on multi-view vision, which includes a robot, an L-shaped line light source, a plurality of monocular cameras, and a control system. The L-shaped line light source is arranged on the robot. The monocular cameras are located above the robot; the fields of view of adjacent monocular cameras overlap, and the combined field of view of the monocular cameras is no smaller than the walking range of the robot. The control system comprises a vision processing system and a signal transmission system; the vision processing system is communicatively connected to the monocular cameras; the signal transmission system comprises an upper computer and a lower computer, the upper computer being communicatively connected to the vision processing system, and the lower computer being arranged on the robot and communicatively connected to the upper computer and the robot.
In the indoor navigation and positioning device based on multi-view vision according to some embodiments, the device further comprises a mounting bracket that fixedly mounts the plurality of monocular cameras.
The invention also provides a positioning method of the indoor navigation positioning device based on the multi-view vision, which is realized by the indoor navigation positioning device based on the multi-view vision. Wherein the positioning method comprises steps S1-S7.
S1, numbering the monocular cameras, and establishing a camera coordinate system O_2a-X_aY_aZ_a for each monocular camera, a pixel coordinate system O_1a-U_aV_a corresponding to each monocular camera, and a world coordinate system O-XYZ in the indoor scene, where a is the camera number. S2, acquiring initial images of the indoor scene with the monocular cameras, collecting the initial image data of all the monocular cameras through the vision processing system, and stitching all the initial image data through the upper computer to obtain a two-dimensional panoramic map. S3, manually planning a target motion trajectory of the robot on the two-dimensional panoramic map through the upper computer, the target motion trajectory being formed by a series of planning points on the two-dimensional panoramic map. S4, calculating the coordinates in the world coordinate system of the series of planning points on the two-dimensional panoramic map. S5, placing the robot in the indoor scene and selecting the robot as a tracking target through the upper computer; during the movement of the robot, the upper computer tracks the robot, obtains its real-time position in the pixel coordinate system, and then calculates its real-time position in the world coordinate system. S6, calculating the real-time posture of the robot at the real-time position of step S5 based on the position, in the real-time images acquired by the monocular cameras, of the L-shaped line light source on the robot; the real-time position and the real-time posture of the robot in the world coordinate system together constitute the real-time pose of the robot.
S7, the upper computer compares the real-time pose of the robot with the target motion trajectory and outputs a walking control signal to the lower computer; the lower computer transmits the received walking control signal to the robot, and the robot executes the walking instruction based on the walking control signal and finally reaches the planned destination.
In the positioning method of the indoor navigation positioning device based on multi-view vision according to some embodiments, in step S4, the calculation process for an arbitrary planning point on the two-dimensional panoramic map includes the steps of: S41, reading the coordinates (u_1, v_1) of the planning point in the pixel coordinate system; S42, selecting two adjacent monocular cameras from all the monocular cameras that have acquired the planning point, projecting the origins of the camera coordinate systems of these two adjacent monocular cameras into the world coordinate system, and obtaining the coordinates P_1(x_1, y_1) and P_2(x_2, y_2) of the projection points; S43, calculating the coordinates (x, y) of the planning point in the world coordinate system, where the calculation formulas are:

x = (x_1 + x_2)/2 + (v_1 - c_ay) · Z_c / f_ay

y = (y_1 + y_2)/2 + (u_1 - c_ax) · Z_c / f_ax

wherein f_ax is the normalized focal length of the monocular camera along the U_a axis, f_ay is the normalized focal length of the monocular camera along the V_a axis, c_ax is the U_a-axis coordinate of the optical center of the monocular camera, c_ay is the V_a-axis coordinate of the optical center of the monocular camera, and Z_c is the vertical distance between the monocular camera and the plane where the robot is located.
In the positioning method of the indoor navigation positioning device based on multi-view vision according to some embodiments, in step S5, the calculation process of the real-time position of the robot in the world coordinate system at any moment includes the steps of: S51, reading the current coordinates (u_1', v_1') of the robot in the pixel coordinate system; S52, selecting two adjacent monocular cameras from all the monocular cameras that have acquired the robot, projecting the origins of the camera coordinate systems of these two adjacent monocular cameras into the world coordinate system, and obtaining the coordinates P_1'(x_1', y_1') and P_2'(x_2', y_2') of the projection points; S53, calculating the coordinates (x', y') of the robot in the world coordinate system, where the calculation formulas are:

x' = (x_1' + x_2')/2 + (v_1' - c_ay) · Z_c / f_ay

y' = (y_1' + y_2')/2 + (u_1' - c_ax) · Z_c / f_ax

wherein f_ax is the normalized focal length of the monocular camera along the U_a axis, f_ay is the normalized focal length of the monocular camera along the V_a axis, c_ax is the U_a-axis coordinate of the optical center of the monocular camera, c_ay is the V_a-axis coordinate of the optical center of the monocular camera, and Z_c is the vertical distance between the monocular camera and the plane where the robot is located.
In the positioning method of the indoor navigation positioning device based on multi-view vision according to some embodiments, in step S6, the calculation process of the real-time posture of the robot at the real-time position of step S5 includes the steps of: S61, selecting a line segment AB on the L-shaped line light source as the target line segment, and reading the current coordinates of the endpoints A and B of the line segment AB in the pixel coordinate system; S62, determining the rotation direction of the robot from the sign of the cross product

e_(t-1) × e_t

where e_t is the direction vector of the line segment AB in the pixel coordinate system at the current moment and e_(t-1) is the direction vector of the line segment AB in the pixel coordinate system at the previous moment; S63, determining the rotation angle θ of the robot by

θ = arccos( (e_t · e_(t-1)) / (|e_t| · |e_(t-1)|) )
In the positioning method of the indoor navigation positioning device based on multi-view vision according to some embodiments, the L-shaped line light source comprises a long line segment and a short line segment, and the line segment AB is the long line segment or the short line segment of the L-shaped line light source.
The invention has the following beneficial effects:
in the indoor navigation positioning device and positioning method based on multi-view vision, the positioning process is neither disturbed by the external environment nor limited by the positioning area, which improves positioning accuracy and reduces positioning cost. In addition, the positioning range achieved by the device and method can be flexibly adjusted as the monocular cameras are redeployed. Furthermore, the device and method are suitable for occasions with high requirements on automation degree and operating efficiency, and can effectively avoid the influence of personnel intervention on production safety and operating efficiency.
Drawings
Fig. 1 is a schematic structural diagram of an indoor navigation and positioning device based on multi-vision of the present invention.
Fig. 2 is a view illustrating the range of the field of view of a plurality of monocular cameras according to the present invention.
FIG. 3 is a schematic diagram of the position of the L-shaped line light source at two different times in the present invention.
Fig. 4 is a relationship diagram of three types of cartesian coordinate systems in the present invention.
Fig. 5 is a schematic block diagram of a positioning method of the indoor navigation positioning device based on multi-vision of the invention.
Wherein the reference numerals are as follows:
1 robot
2 L-shaped line light source
21 long line segment
22 short line segment
3 monocular camera
4 control system
41 vision processing system
42 signal transmission system
421 upper computer
422 lower computer
5 mounting bracket
S walking range
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments. In addition, "a plurality" appearing in the present application means two or more (including two).
The indoor navigation positioning device based on multi-view vision and the positioning method thereof according to the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1 to 4, the indoor navigation and positioning device based on multi-view vision of the present application includes a robot 1, an L-shaped line light source 2, a plurality of monocular cameras 3, and a control system 4.
The robot 1 (provided with a control program) is communicatively connected to the control system 4, and the robot 1 completes walking under the control of the control system 4. In some embodiments, the robot 1 may be a mobile cart or a mobile manipulator.
The L-shaped line light source 2 is provided on the robot 1. During the positioning process of the indoor navigation positioning device based on multi-view vision, the L-shaped line light source 2 appears clearly as an L-shaped polyline segment in the images acquired by the monocular cameras 3.
In some embodiments, referring to fig. 2, the L-shaped line light source 2 includes a long line segment 21 and a short line segment 22. Both the long line segment 21 and the short line segment 22 may be straight line light sources; the L-shaped line light source 2 is then formed by splicing two straight line light sources, and the included angle between the two straight line light sources can be set reasonably according to the actual situation.
Referring to fig. 1 and 2, the plurality of monocular cameras 3 are located above the robot 1, and the fields of view of adjacent monocular cameras 3 overlap, so that at any time and at any position the robot 1 can be photographed by at least two monocular cameras 3. The number and relative positions of the monocular cameras 3 can be chosen for indoor scenes of different extents. To ensure that the monocular cameras 3 can always observe the robot 1 as it walks, the combined field of view of the monocular cameras 3 is no smaller than the walking range S of the robot 1. It should be noted that, to facilitate the positioning calculation and reduce positioning error, the internal parameters of all the monocular cameras 3 are kept consistent.
In some embodiments, referring to fig. 1, control system 4 includes a vision processing system 41 and a signal transmission system 42. Wherein the vision processing system 41 is communicatively connected to the plurality of monocular cameras 3. The signal transmission system 42 includes an upper computer 421 and a lower computer 422, the upper computer 421 is communicatively connected to the vision processing system 41, and the lower computer 422 is disposed on the robot 1 and is communicatively connected to the upper computer 421 and the robot 1.
In some embodiments, referring to fig. 1, the indoor navigation and positioning device based on multi-view vision further includes a mounting bracket 5. The mounting bracket 5 fixedly mounts the plurality of monocular cameras 3 and ensures that they are mounted at the same height, i.e. the vertical distances between all monocular cameras 3 and the plane where the robot 1 is located (i.e. Z_c described below) are all equal.
In the indoor navigation positioning device based on multi-view vision, the plurality of monocular cameras 3 acquire the real-time position of the robot 1; the vision processing system 41 collects the image data acquired by the monocular cameras 3 through its communication connection with them; the upper computer 421 of the signal transmission system 42 stitches the image data received from the vision processing system 41 into a two-dimensional panoramic map of the indoor scene, and performs motion trajectory planning, target tracking, and pose coordinate conversion for the robot 1 while outputting walking control signals to the lower computer 422; the lower computer 422 transmits the received walking control signal to the robot 1, and the robot 1 executes the walking instruction based on the walking control signal and finally reaches the planned destination, thereby realizing indoor navigation and positioning. The device is simple in structure and convenient to operate, is not disturbed by the external environment during positioning, and is not limited by the positioning area (i.e. it has a wide application range), which improves positioning accuracy and reduces positioning cost. Moreover, because the number and positions of the monocular cameras 3 can be deployed flexibly, the device is highly portable with a low migration cost, and its positioning range can be adjusted flexibly as the cameras are redeployed.
In addition, the device is suitable for occasions with high requirements on automation degree and operating efficiency (especially where a robot reciprocates indoors over a long period), and can effectively avoid the influence of personnel intervention on production safety and operating efficiency.
The positioning method of the indoor navigation positioning device based on the multi-view vision is implemented by using the indoor navigation positioning device based on the multi-view vision, and referring to fig. 1 to 5, the positioning method of the indoor navigation positioning device based on the multi-view vision includes steps S1-S7.
S1, numbering the monocular cameras 3, and establishing a camera coordinate system O_2a-X_aY_aZ_a for each monocular camera 3, a pixel coordinate system O_1a-U_aV_a corresponding to each monocular camera 3, and a world coordinate system O-XYZ in the indoor scene, where a is the camera number. Referring to fig. 4, the X_a axis of each camera coordinate system, the U_a axis of the corresponding pixel coordinate system, and the Y axis of the world coordinate system are parallel to one another; the Y_a axis of each camera coordinate system, the V_a axis of the corresponding pixel coordinate system, and the X axis of the world coordinate system are parallel to one another; the Z_a axis of each camera coordinate system and the Z axis of the world coordinate system are parallel to each other.
S2, acquiring an initial image of an indoor scene by using the plurality of monocular cameras 3, acquiring initial image data of all the monocular cameras 3 through the vision processing system 41, and stitching all the initial image data through the upper computer 421 to obtain a two-dimensional panoramic map.
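The stitching in step S2 is not detailed in the patent. As an illustrative sketch only: because all cameras share one mounting height and a known fixed layout, each initial image maps into the panorama by a pure translation, so a minimal version can paste the images onto a common canvas at known pixel offsets. All image sizes and offsets below are invented for the example.

```python
import numpy as np

def stitch_panorama(images, offsets_px):
    """Paste same-scale camera images into one panoramic canvas.

    images:     list of HxWx3 uint8 arrays, one per monocular camera.
    offsets_px: list of (row, col) top-left positions of each image in
                the panorama, known from the fixed camera layout.
    """
    h, w = images[0].shape[:2]
    rows = max(r for r, _ in offsets_px) + h
    cols = max(c for _, c in offsets_px) + w
    canvas = np.zeros((rows, cols, 3), dtype=np.uint8)
    for img, (r, c) in zip(images, offsets_px):
        canvas[r:r + h, c:c + w] = img  # overlap regions are overwritten
    return canvas

# Two 100x160 images whose fields of view overlap by 40 columns.
left = np.full((100, 160, 3), 50, dtype=np.uint8)
right = np.full((100, 160, 3), 200, dtype=np.uint8)
panorama = stitch_panorama([left, right], [(0, 0), (0, 120)])
print(panorama.shape)  # (100, 280, 3)
```

A production system would blend the overlap or register the images by feature matching; the hard-overwrite here is only the simplest behavior consistent with fixed, calibrated cameras.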
And S3, manually planning a target motion track of the robot 1 on the two-dimensional panoramic map through the upper computer 421, wherein the target motion track is formed by a series of planning points on the two-dimensional panoramic map.
And S4, calculating coordinates of a series of planning points on the two-dimensional panoramic map in a world coordinate system (namely coordinate conversion of the planning points between a pixel coordinate system and the world coordinate system).
S5, placing the robot 1 in the indoor scene, selecting the robot 1 as a tracking target through the upper computer 421, performing target tracking on the robot 1 through the upper computer 421 in the moving process of the robot 1, obtaining the real-time position of the robot 1 under the pixel coordinate system, and then calculating the real-time position of the robot 1 under the world coordinate system (namely coordinate conversion of the real-time position of the robot 1 between the pixel coordinate system and the world coordinate system).
S6, calculating the real-time posture of the robot 1 at the real-time position of step S5 based on the position, in the real-time images acquired by the monocular cameras 3, of the L-shaped line light source 2 on the robot 1; the real-time position and the real-time posture of the robot 1 in the world coordinate system together constitute the real-time pose of the robot 1.
S7, the upper computer 421 compares the real-time pose of the mobile robot 1 with the target motion trajectory, and outputs a walking control signal to the lower computer 422, the lower computer 422 transmits the received walking control signal to the robot 1, and the robot 1 completes a walking instruction based on the walking control signal and finally reaches the planned destination.
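Step S7's compare-and-command loop is described only at a high level. The following is a hedged sketch of one possible realization; the command vocabulary, tolerances, and pose format are invented for illustration, not taken from the patent.

```python
import math

def walking_signal(pose, path, pos_tol=0.05, ang_tol=5.0):
    """Very simplified version of the upper computer's comparison step.

    pose: (x, y, heading_deg) real-time pose of the robot.
    path: mutable list of (x, y, heading_deg) planning points.
    Returns a coarse command string for the lower computer.
    """
    if not path:
        return "stop"
    tx, ty, th = path[0]
    if math.hypot(tx - pose[0], ty - pose[1]) < pos_tol:
        path.pop(0)                              # planning point reached
        return "stop" if not path else "advance"
    err = (th - pose[2] + 180) % 360 - 180       # signed heading error
    if abs(err) > ang_tol:
        return "turn_left" if err > 0 else "turn_right"
    return "advance"

commands = [walking_signal((0.0, 0.0, 0.0), [(1.0, 0.0, 0.0)]),
            walking_signal((0.0, 0.0, -30.0), [(1.0, 0.0, 0.0)])]
print(commands)  # ['advance', 'turn_left']
```

In practice the lower computer would translate such commands into wheel speeds, and the loop would re-run each time a new pose estimate arrives from the cameras.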
In the positioning method of the indoor navigation positioning device based on multi-view vision, the positioning process is neither disturbed by the external environment nor limited by the positioning area, which improves positioning accuracy and reduces positioning cost. In addition, the positioning range achieved by the method can be flexibly adjusted as the monocular cameras are redeployed. Furthermore, the method is suitable for occasions with high requirements on automation degree and operating efficiency (especially where a robot reciprocates indoors over a long period), and can effectively avoid the influence of personnel intervention on production safety and operating efficiency.
In one embodiment, referring to fig. 1, in step S4, the calculation process of the arbitrary planning point on the two-dimensional panoramic map includes steps S41-S43.
S41, reading the coordinates (u_1, v_1) of the planning point in the pixel coordinate system. It should be noted that the initial images acquired by at least two monocular cameras 3 contain the planning point, and the coordinates of the planning point in the pixel coordinate systems corresponding to these monocular cameras 3 are consistent.
S42, selecting two adjacent monocular cameras 3 from all the monocular cameras 3 that have acquired the planning point, projecting the origins of the camera coordinate systems of these two adjacent monocular cameras 3 into the world coordinate system, and obtaining the coordinates P_1(x_1, y_1) and P_2(x_2, y_2) of the projection points of the two camera-coordinate-system origins.
S43, calculating the coordinates (x, y) of the planning point in the world coordinate system, where the calculation formulas are:

x = (x_1 + x_2)/2 + (v_1 - c_ay) · Z_c / f_ay

y = (y_1 + y_2)/2 + (u_1 - c_ax) · Z_c / f_ax

wherein f_ax is the normalized focal length of the monocular camera 3 along the U_a axis, f_ay is the normalized focal length of the monocular camera 3 along the V_a axis, c_ax is the U_a-axis coordinate of the optical center of the monocular camera 3, c_ay is the V_a-axis coordinate of the optical center of the monocular camera 3, and Z_c is the vertical distance between the monocular camera 3 and the plane where the robot 1 is located.
In order to facilitate the positioning calculation and reduce the positioning error, the internal parameters of the monocular cameras 3 are kept consistent; since the internal parameters of any given camera model are fixed, we may set f_1x = f_2x = … = f_x, f_1y = f_2y = … = f_y, c_1x = c_2x = … = c_x, and c_1y = c_2y = … = c_y. The above formulas then become:

x = (x_1 + x_2)/2 + (v_1 - c_y) · Z_c / f_y

y = (y_1 + y_2)/2 + (u_1 - c_x) · Z_c / f_x
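A short sketch of this back-projection with shared intrinsics. The formula form here is reconstructed from the pinhole camera model and the axis mapping stated in step S1 (pixel U axis parallel to world Y, pixel V axis parallel to world X), since the original formula images are not reproduced in this text, and all numeric values are invented for the example.

```python
def pixel_to_world(u1, v1, p1, p2, fx, fy, cx, cy, zc):
    """Convert a pixel coordinate seen by two adjacent overhead cameras
    into world-plane coordinates (pinhole model, shared intrinsics).

    p1, p2: (x, y) world projections of the two camera origins.
    zc:     vertical distance from the cameras to the robot plane.
    Axis mapping from the patent: pixel U is parallel to world Y,
    pixel V is parallel to world X.
    """
    x = (p1[0] + p2[0]) / 2 + (v1 - cy) * zc / fy
    y = (p1[1] + p2[1]) / 2 + (u1 - cx) * zc / fx
    return x, y

# Invented intrinsics: a point imaged at the optical center maps to the
# midpoint of the two camera-origin projections.
x, y = pixel_to_world(320, 240, (0.0, 0.0), (2.0, 0.0),
                      fx=800.0, fy=800.0, cx=320.0, cy=240.0, zc=3.0)
print(x, y)  # 1.0 0.0
```

The same function serves step S5 (robot position) by passing the robot's current pixel coordinates instead of a planning point's.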
in one embodiment, in step S5, the calculation process of the real-time position of the robot 1 at any time in the world coordinate system includes steps S51-S53.
S51, reading the current coordinates (u_1', v_1') of the robot 1 in the pixel coordinate system. It should be noted that, during the movement of the robot 1, at least two monocular cameras 3 acquire the current position of the robot 1 at any moment, and the current coordinates of the robot 1 in the pixel coordinate systems corresponding to these monocular cameras 3 are all consistent.
S52, selecting two adjacent monocular cameras 3 from all the monocular cameras 3 that have acquired the current position of the robot 1, projecting the origins of the camera coordinate systems of these two adjacent monocular cameras 3 into the world coordinate system, and obtaining the coordinates P_1'(x_1', y_1') and P_2'(x_2', y_2') of the projection points of the two camera-coordinate-system origins.
S53, calculating the coordinates (x', y') of the robot 1 in the world coordinate system, where the calculation formulas are:

x' = (x_1' + x_2')/2 + (v_1' - c_ay) · Z_c / f_ay

y' = (y_1' + y_2')/2 + (u_1' - c_ax) · Z_c / f_ax

wherein f_ax is the normalized focal length of the monocular camera 3 along the U_a axis, f_ay is the normalized focal length of the monocular camera 3 along the V_a axis, c_ax is the U_a-axis coordinate of the optical center of the monocular camera 3, c_ay is the V_a-axis coordinate of the optical center of the monocular camera 3, and Z_c is the vertical distance between the monocular camera 3 and the plane where the robot 1 is located.
Similarly, to facilitate the positioning calculation and reduce the positioning error, the internal parameters of the monocular cameras 3 are kept consistent; since the internal parameters of any given camera model are fixed, we may set f_1x = f_2x = … = f_x, f_1y = f_2y = … = f_y, c_1x = c_2x = … = c_x, and c_1y = c_2y = … = c_y. The above formulas then become:

x' = (x_1' + x_2')/2 + (v_1' - c_y) · Z_c / f_y

y' = (y_1' + y_2')/2 + (u_1' - c_x) · Z_c / f_x
in an embodiment, in step S6, the calculation process of the real-time pose (i.e., orientation) of the robot 1 at the real-time position in step S5 includes steps S61-S63.
S61, selecting a line segment AB on the L-shaped line light source 2 as the target line segment, and reading the current coordinates of the endpoints A and B of the line segment AB in the pixel coordinate system. The monocular cameras 3 that acquire the current position of the robot 1 acquire the line segment AB at the same time, and the coordinates of the endpoints A and B can be read directly in the pixel coordinate system corresponding to any monocular camera 3 that has acquired the line segment AB; these coordinates are consistent across the pixel coordinate systems of all monocular cameras 3 that have acquired the line segment AB.
S62, by
Figure BDA0002795467130000101
It is determined in which direction the robot 1 is rotating (i.e. whether the robot 1 is rotating clockwise or counterclockwise), wherein
Figure BDA0002795467130000102
Is the direction vector of the line segment AB in the pixel coordinate system at the current moment,
Figure BDA0002795467130000103
is the direction vector of the line segment AB in the pixel coordinate system at the previous moment.
S63, by
Figure BDA0002795467130000104
The rotation angle θ of the robot 1 is determined.
In one embodiment, in step S6, the line segment AB may be the long line segment 21 or the short line segment 22 of the L-shaped linear light source 2. Because the long line segment 21 and the short line segment 22 of the L-shaped linear light source 2 differ in length, the line segment AB can always be identified as the same line segment at any two adjacent moments throughout the positioning calculation, which improves the accuracy of determining the rotation angle θ of the robot 1.
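The S62 and S63 formulas appear only as image references in this text. A plausible sketch, assuming S62 uses the sign of the 2-D cross product of the previous and current direction vectors of AB and S63 uses the arccosine of their normalized dot product (both standard constructions, not confirmed by the original), is:

```python
import math

def rotation_from_segment(a_prev, b_prev, a_cur, b_cur):
    """Estimate the robot's rotation between two frames from segment AB.

    Returns (direction, theta_degrees). With the usual mathematical axis
    convention, a positive cross product means counterclockwise rotation;
    in an image coordinate system whose V axis points down, the sign
    convention flips (an assumption to verify against the actual setup).
    """
    # direction vectors of AB at the previous and current moments
    dxp, dyp = b_prev[0] - a_prev[0], b_prev[1] - a_prev[1]
    dxc, dyc = b_cur[0] - a_cur[0], b_cur[1] - a_cur[1]
    cross = dxp * dyc - dyp * dxc  # S62: sign gives the rotation direction
    dot = dxp * dxc + dyp * dyc    # S63: angle from the normalized dot product
    theta = math.degrees(math.acos(dot / (math.hypot(dxp, dyp) * math.hypot(dxc, dyc))))
    if cross > 0:
        direction = "counterclockwise"
    elif cross < 0:
        direction = "clockwise"
    else:
        direction = "none"
    return direction, theta
```

Because arccos is symmetric, the dot product alone cannot distinguish the two rotation senses, which is presumably why the patent uses a separate direction test (S62) before computing θ (S63).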

Claims (7)

1. An indoor navigation positioning device based on multi-view vision is characterized by comprising a robot (1), an L-shaped line light source (2), a plurality of monocular cameras (3) and a control system (4);
the L-shaped linear light source (2) is arranged on the robot (1);
the monocular cameras (3) are positioned above the robot (1), the fields of view of two adjacent monocular cameras (3) overlap, and the total field of view of the plurality of monocular cameras (3) is not smaller than the walking range (S) of the robot (1);
the control system (4) comprises a vision processing system (41) and a signal transmission system (42); the vision processing system (41) is communicatively connected to the plurality of monocular cameras (3); the signal transmission system (42) comprises an upper computer (421) and a lower computer (422); the upper computer (421) is communicatively connected to the vision processing system (41), and the lower computer (422) is arranged on the robot (1) and is communicatively connected to the upper computer (421) and the robot (1).
2. The indoor navigation positioning device based on multi-view vision according to claim 1, further comprising a mounting bracket (5), wherein the plurality of monocular cameras (3) are fixedly mounted on the mounting bracket (5).
3. A positioning method of an indoor navigation positioning device based on multi-view vision, characterized in that the positioning method is implemented by the indoor navigation positioning device based on multi-view vision of claim 1, and comprises the following steps:
S1, numbering the monocular cameras (3), and establishing a camera coordinate system O_2a-X_aY_aZ_a for each monocular camera (3), a pixel coordinate system O_1a-U_aV_a corresponding to each monocular camera (3), and a world coordinate system O-XYZ in the indoor scene, wherein a is the camera number;
S2, acquiring initial images of the indoor scene with the plurality of monocular cameras (3), collecting the initial image data of all the monocular cameras (3) through the vision processing system (41), and stitching all the initial image data through the upper computer (421) to obtain a two-dimensional panoramic map;
S3, manually planning a target motion trajectory of the robot (1) on the two-dimensional panoramic map through the upper computer (421), wherein the target motion trajectory is formed by a series of planning points on the two-dimensional panoramic map;
S4, calculating the coordinates in the world coordinate system of the series of planning points on the two-dimensional panoramic map;
S5, placing the robot (1) in the indoor scene and selecting the robot (1) as the tracking target through the upper computer (421); during the movement of the robot (1), tracking the robot (1) through the upper computer (421), obtaining the real-time position of the robot (1) in the pixel coordinate system, and then calculating the real-time position of the robot (1) in the world coordinate system;
S6, calculating the real-time attitude of the robot (1) at the real-time position of step S5 based on the position, in the real-time images acquired by the monocular cameras (3), of the L-shaped linear light source (2) on the robot (1), wherein the real-time position and the real-time attitude of the robot (1) in the world coordinate system together constitute the real-time pose of the robot (1);
S7, comparing, by the upper computer (421), the real-time pose of the robot (1) with the target motion trajectory and outputting a walking control signal to the lower computer (422); the lower computer (422) transmits the received walking control signal to the robot (1), and the robot (1) executes the walking instruction based on the walking control signal and finally reaches the planned destination.
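The claim does not specify how the S7 comparison is turned into a walking control signal. The following hypothetical sketch maps the difference between the real-time pose and the next planning point to a simple command; the signal names, thresholds, and bang-bang control style are illustrative assumptions, not part of the claimed method.

```python
import math

def walking_control_signal(pose, target, pos_tol=0.05, heading_tol=10.0):
    """Map a pose (x, y, theta_degrees) and a target planning point (tx, ty)
    to one of 'stop' / 'forward' / 'turn_left' / 'turn_right'. All names and
    thresholds are placeholders for the upper computer's actual signals."""
    x, y, theta = pose
    tx, ty = target
    dx, dy = tx - x, ty - y
    if math.hypot(dx, dy) <= pos_tol:
        return "stop"  # planning point reached within tolerance
    heading_to_target = math.degrees(math.atan2(dy, dx))
    # wrap the heading error into (-180, 180]
    err = (heading_to_target - theta + 180.0) % 360.0 - 180.0
    if abs(err) > heading_tol:
        return "turn_left" if err > 0 else "turn_right"
    return "forward"
```

In such a loop the upper computer (421) would evaluate this for every tracked frame and forward the result to the lower computer (422), which relays it to the robot (1).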
4. The positioning method of the indoor navigation positioning device based on multi-view vision as claimed in claim 3, wherein in step S4, the calculation process of any planning point on the two-dimensional panoramic map comprises the steps of:
S41, reading the coordinates (u_1, v_1) of the planning point in the pixel coordinate system;
S42, selecting two adjacent monocular cameras (3) from all the monocular cameras (3) that have acquired the planning point, projecting the origins of the camera coordinate systems of the two adjacent monocular cameras (3) into the world coordinate system, and obtaining the coordinates P_1(x_1, y_1) and P_2(x_2, y_2) of the projection points of those origins;
S43, calculating the coordinates (x, y) of the planning point in the world coordinate system, wherein the calculation formula is:
Figure FDA0002795467120000021
Figure FDA0002795467120000022
wherein f_ax is the normalized focal length of the monocular camera (3) along the U_a axis, f_ay is the normalized focal length of the monocular camera (3) along the V_a axis, c_ax is the U_a-axis coordinate of the optical center of the monocular camera (3), c_ay is the V_a-axis coordinate of the optical center of the monocular camera (3), and Z_c is the vertical distance between the monocular camera (3) and the plane in which the robot (1) is located.
5. The positioning method of the indoor navigation positioning device based on multi-view vision as claimed in claim 3, wherein in step S5, the calculation process of the real-time position of the robot (1) in the world coordinate system at any moment comprises the steps of:
S51, reading the current coordinates (u_1', v_1') of the robot (1) in the pixel coordinate system;
S52, selecting two adjacent monocular cameras (3) from all the monocular cameras (3) that have acquired the robot (1), projecting the origins of the camera coordinate systems of the two adjacent monocular cameras (3) into the world coordinate system, and obtaining the coordinates P_1'(x_1', y_1') and P_2'(x_2', y_2') of the projection points of those origins;
S53, calculating the coordinates (x', y') of the robot (1) in the world coordinate system, wherein the calculation formula is:
Figure FDA0002795467120000031
Figure FDA0002795467120000032
wherein f_ax is the normalized focal length of the monocular camera (3) along the U_a axis, f_ay is the normalized focal length of the monocular camera (3) along the V_a axis, c_ax is the U_a-axis coordinate of the optical center of the monocular camera (3), c_ay is the V_a-axis coordinate of the optical center of the monocular camera (3), and Z_c is the vertical distance between the monocular camera (3) and the plane in which the robot (1) is located.
6. The positioning method of the indoor navigation positioning device based on multi-view vision as claimed in claim 3, wherein in step S6, the calculation process of the real-time pose of the robot (1) at the real-time position of step S5 comprises the steps of:
S61, selecting the line segment AB on the L-shaped linear light source (2) as the target segment, and reading the current coordinates of the end points A and B of the line segment AB in the pixel coordinate system;
s62, by
Figure FDA0002795467120000033
Determining the direction of rotation of the robot (1), wherein
Figure FDA0002795467120000034
Is the direction vector of the line segment AB currently under the pixel coordinate system,
Figure FDA0002795467120000035
is the direction vector of the line segment AB in the pixel coordinate system at the previous moment;
s63, by
Figure FDA0002795467120000036
The rotation angle theta of the robot (1) is determined.
7. The positioning method of the indoor navigation positioning device based on multi-view vision as claimed in claim 6, wherein the L-shaped linear light source (2) comprises a long line segment (21) and a short line segment (22), and the line segment AB is the long line segment (21) or the short line segment (22) of the L-shaped linear light source (2).
CN202011329957.2A 2020-11-24 2020-11-24 Indoor navigation positioning device based on multi-view vision and positioning method thereof Pending CN112504263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011329957.2A CN112504263A (en) 2020-11-24 2020-11-24 Indoor navigation positioning device based on multi-view vision and positioning method thereof

Publications (1)

Publication Number Publication Date
CN112504263A 2021-03-16

Family

ID=74959711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011329957.2A Pending CN112504263A (en) 2020-11-24 2020-11-24 Indoor navigation positioning device based on multi-view vision and positioning method thereof

Country Status (1)

Country Link
CN (1) CN112504263A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217439A (en) * 2014-09-26 2014-12-17 南京工程学院 Indoor visual positioning system and method
CN106527426A (en) * 2016-10-17 2017-03-22 江苏大学 Indoor multi-target track planning system and method
CN106843224A (en) * 2017-03-15 2017-06-13 广东工业大学 A kind of method and device of multi-vision visual positioning collaboration guiding transport vehicle
CN108469254A (en) * 2018-03-21 2018-08-31 南昌航空大学 A kind of more visual measuring system overall calibration methods of big visual field being suitable for looking up and overlooking pose
US20180330175A1 (en) * 2017-05-10 2018-11-15 Fotonation Limited Multi-camera vision system and method of monitoring
CN109506642A (en) * 2018-10-09 2019-03-22 浙江大学 A kind of robot polyphaser vision inertia real-time location method and device
CN110118528A (en) * 2019-04-29 2019-08-13 天津大学 A kind of line-structured light scaling method based on chessboard target
CN110136205A (en) * 2019-04-12 2019-08-16 广州极飞科技有限公司 The disparity adjustment method, apparatus and system of more mesh cameras
CN111445531A (en) * 2020-03-24 2020-07-24 云南电网有限责任公司楚雄供电局 Multi-view camera navigation method, device, equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022199325A1 (en) * 2021-03-24 2022-09-29 International Business Machines Corporation Robotic geometric camera calibration and monitoring alert configuration and testing
US11738464B2 (en) 2021-03-24 2023-08-29 International Business Machines Corporation Robotic geometric camera calibration and monitoring alert configuration and testing
CN114298398A (en) * 2021-12-24 2022-04-08 北京交通大学 High-speed train dynamic tracking operation optimization method based on elastic adjustment strategy

Similar Documents

Publication Publication Date Title
CN109387186B (en) Surveying and mapping information acquisition method and device, electronic equipment and storage medium
CN110262507B (en) Camera array robot positioning method and device based on 5G communication
CN112470092B (en) Surveying and mapping system, surveying and mapping method, device, equipment and medium
CN111436208B (en) Planning method and device for mapping sampling points, control terminal and storage medium
CN112469967B (en) Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
CN112504263A (en) Indoor navigation positioning device based on multi-view vision and positioning method thereof
WO2020063058A1 (en) Calibration method for multi-degree-of-freedom movable vision system
CN106352871A (en) Indoor visual positioning system and method based on artificial ceiling beacon
CN106370160A (en) Robot indoor positioning system and method
CN111780715A (en) Visual ranging method
JP5019478B2 (en) Marker automatic registration method and system
CN114071008A (en) Image acquisition device and image acquisition method
CN106468539A (en) Method and apparatus for generating geographical coordinate
JP7220784B2 (en) Survey sampling point planning method, device, control terminal and storage medium
JP2011174799A (en) Photographing route calculation device
US20120002044A1 (en) Method and System for Implementing a Three-Dimension Positioning
CN115046531A (en) Pole tower measuring method based on unmanned aerial vehicle, electronic platform and storage medium
WO2022052409A1 (en) Automatic control method and system for multi-camera filming
CN111103899A (en) Holder positioning method and device
US11310423B2 (en) Image capturing method and image capturing apparatus
CN116228888B (en) Conversion method and system for geographic coordinates and PTZ camera coordinates
CN111868656B (en) Operation control system, operation control method, device, equipment and medium
CN117103295A (en) Pose determining method and building robot
WO2022078444A1 (en) Program control method for 3d information acquisition
CN112304250B (en) Three-dimensional matching equipment and method between moving objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210316)