CN108534757B - Cloud-based visual map scale detection method and device - Google Patents

Cloud-based visual map scale detection method and device

Info

Publication number
CN108534757B
CN108534757B (application CN201711421833.5A)
Authority
CN
China
Prior art keywords
information
position information
obtaining
abscissa
ordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711421833.5A
Other languages
Chinese (zh)
Other versions
CN108534757A (en)
Inventor
王超鹏
廉士国
林义闽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Beijing Technologies Co Ltd
Original Assignee
Cloudminds Beijing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Beijing Technologies Co Ltd
Priority to CN201711421833.5A
Publication of CN108534757A
Application granted
Publication of CN108534757B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G01C11/06 - Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C11/08 - Interpretation of pictures by comparison of two or more pictures of the same area, the pictures not being supported in the same relative position as when they were taken
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a cloud-based visual map scale detection method, relating to the field of visual navigation. The method comprises the following steps: acquiring an image signal, and obtaining the moving position information of each frame in each step from the image signal; acquiring an acceleration signal, and obtaining the step length information of each step from the acceleration signal; and obtaining the scale information corresponding to the visual map to be established from the moving position information and the step length information. The invention further provides a corresponding cloud-based visual map scale detection device, an electronic device and a computer program product. While image signals are acquired in the traditional way, step length information for each step is additionally obtained from the acceleration signal; this added source of scale information corrects scale errors that arise during visual mapping and thus provides more appropriate and accurate scale information. The invention can provide accurate map information for visual navigation and positioning.

Description

Cloud-based visual map scale detection method and device
Technical Field
The invention relates to the field of visual navigation, in particular to a cloud-based visual map scale detection method and device.
Background
During motion, two consecutive camera frames overlap, that is, both frames observe some common scenes and feature points in the three-dimensional world. These scene feature points are projected onto the 2D images, and by aligning the images or matching features, the correspondences between features in the preceding and following images can be found. Using the camera's imaging geometry (including the camera parameters) and its constraints, the motion between the two frames (rotation matrix R and translation t) can be solved. A series of relative transformation matrices of the camera can therefore be obtained, from which the camera's pose information can be deduced.
In monocular visual odometry, the motion of the object in the x/y directions is known only up to an arbitrary unit; the actual movement distance cannot be obtained because scale information is missing. The estimated motion of the object therefore suffers from scale loss, which greatly limits the range of applications.
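To make the two-frame geometry above concrete, the following sketch recovers R and t from matched feature points with OpenCV's epipolar-geometry routines; the function name and the assumption that the intrinsic matrix K is known are illustrative, not part of the patent. Note that the recovered translation has unit norm, which is precisely the scale loss described above.

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Estimate rotation R and unit-norm translation t between two frames.

    pts1, pts2: Nx2 arrays of matched feature points (pixel coordinates).
    K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # ||t|| = 1: direction is known, metric scale is not
```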
Disclosure of Invention
The embodiments of the invention provide a cloud-based visual map scale detection method and device, aiming to solve the problem that map scale calculation errors easily occur during visual map building with a monocular camera because scale information is lacking.
The embodiment of the invention provides a cloud-based visual map scale detection method in a first aspect, which comprises the following steps:
acquiring an image signal, and acquiring moving position information of each frame in each step according to the image signal;
acquiring an acceleration signal, and acquiring step length information of each step according to the acceleration signal;
and obtaining the scale information corresponding to the visual map to be established according to the mobile position information and the step length information.
A second aspect of the embodiments of the present invention provides a cloud-based visual map scale detection apparatus, where the apparatus includes: the system comprises a camera module, an IMU module, a visual odometer module and a processor;
the camera module is used for acquiring an image signal and sending the image signal to the visual odometer module;
the IMU module is used for acquiring an acceleration signal and sending the acceleration signal to the processor;
the visual odometer module is used for receiving the image signal sent by the camera module and acquiring the moving position information of each frame in each step according to the image signal;
the processor is configured with processor-executable operational instructions to perform operations comprising:
acquiring step length information of each step according to the acceleration signal;
and obtaining the scale information corresponding to the visual map to be established according to the mobile position information and the step length information.
A third aspect of an embodiment of the present invention provides an electronic device, including: a display, a memory, one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing steps of the cloud-based visual map scale detection method of the first aspect.
A fourth aspect of embodiments of the present invention provides a computer program product, which includes a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions, which, when executed by a computer, cause the computer to perform the steps of the cloud-based visual map scale detection method according to the first aspect.
In this method, while image signals are acquired in the traditional way, the step length information of each step is additionally acquired from an acceleration signal; this added source of scale information corrects scale errors that arise during visual mapping and thus provides more appropriate and accurate scale information. The method can be applied to indoor and outdoor visual map building and provides accurate map information for visual navigation and positioning.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a cloud-based visual map scale detection method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating obtaining the scale information corresponding to the visual map to be created according to the moving position information and the step length information according to the embodiment of the present invention;
fig. 3 is a schematic diagram of a cloud-based visual map scale detection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions and advantages of the embodiments of the present invention more apparent, the following further detailed description of the exemplary embodiments of the present invention is provided with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and are not exhaustive of all the embodiments. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The embodiment of the invention can be applied to the indoor and outdoor visual map building process and provides accurate map information for visual navigation and positioning.
Example 1
As shown in fig. 1, this embodiment provides a cloud-based visual map scale detection method, where the method includes:
s101, acquiring an image signal, and acquiring the moving position information of each frame in each step according to the image signal.
Specifically, the image signal in the method of this embodiment is acquired by an existing monocular camera, and the position information of each frame in each step is obtained from that image signal. The position information may be obtained by a visual odometer.
And S102, acquiring an acceleration signal, and acquiring step length information of each step according to the acceleration signal.
Specifically, the method described in this embodiment introduces a scheme capable of providing scale information in order to overcome the problem that map scale calculation errors easily occur in visual mapping with current monocular cameras. To this end, the method provides step length information from which the scale information is obtained, and this step length information is derived from the acceleration signal.
In the method of the present embodiment, the acceleration signal is derived from an inertial measurement module, i.e., an IMU module, disposed on the moving object. The IMU module can estimate the step length by acquiring the acceleration information generated when the moving object moves, so as to obtain the scale information.
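The patent does not specify how the step length is computed from the acceleration signal; the sketch below shows one common pedestrian dead-reckoning approach (peak detection on the acceleration magnitude combined with the Weinberg step-length model) purely as an assumed illustration. The calibration constant K_CAL and the peak-detection thresholds are hypothetical values.

```python
import numpy as np
from scipy.signal import find_peaks

K_CAL = 0.48  # hypothetical per-user calibration constant for the Weinberg model

def step_lengths(acc, fs):
    """Detect steps in a 3-axis accelerometer stream and estimate each step length.

    acc: Nx3 array of accelerations in m/s^2; fs: sampling rate in Hz.
    Returns (step_indices, lengths), one entry per detected step.
    """
    mag = np.linalg.norm(acc, axis=1) - 9.81                    # magnitude minus gravity
    peaks, _ = find_peaks(mag, height=1.0, distance=max(1, int(0.3 * fs)))
    bounds = np.concatenate(([0], peaks, [len(mag) - 1]))
    lengths = []
    for i in range(1, len(bounds) - 1):
        window = mag[bounds[i - 1]:bounds[i + 1] + 1]           # samples around step i
        # Weinberg model: step length ~ K * (a_max - a_min) ** 0.25
        lengths.append(K_CAL * (window.max() - window.min()) ** 0.25)
    return peaks, np.asarray(lengths)
```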
It should be noted that, step S101 and step S102 in the method described in this embodiment do not have a strict order when acquiring signals, and may also be executed synchronously.
And S103, obtaining the scale information corresponding to the visual map to be established according to the moving position information and the step length information.
In this embodiment, the visual odometer outputs (x_{i-1}, y_{i-1}) and (x_i, y_i) before and after each step, together with the step length d_i of that step, are used to map the coordinate points. The scale is normalized according to the scale factor and the odometer information corresponding to each step, thereby providing the scale information needed to draw an accurate map.
Specifically, as shown in fig. 2, the process of obtaining the scale information corresponding to the visual map to be established according to the moving position information and the step length information includes:
s1031, obtaining a proportionality coefficient according to the moving position information and the step length information;
s1032, carrying out re-projection on the mobile position information according to the proportion coefficient to generate re-projection position information;
and S1033, calculating and obtaining the scale information corresponding to the visual map to be established according to the reprojection position information.
The above steps are described in detail below.
Firstly, acquiring an actual scale factor corresponding to each step, and acquiring a proper proportional coefficient for coordinate normalization processing.
For step i, the step length is d_i and the odometer outputs before and after the step are (x_{i-1}, y_{i-1}) and (x_i, y_i). The actual scale factor s_i corresponding to that step can then be obtained:

s_i = \frac{\sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}}{d_i}
The scale factor set S = \{s_1, s_2, s_3, \ldots, s_{n-1}, s_n\} can then be obtained over the course of map building. The maximum of the set S is found and denoted s_{max}:

s_{max} = \max\{s_1, s_2, s_3, \ldots, s_{n-1}, s_n\}
For step i, the actual scale factor s_i is compared with the maximum value s_{max}, giving the proportionality coefficient N_i:

N_i = \frac{s_{max}}{s_i}
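A minimal NumPy sketch of this first stage, under the reconstruction above (the scale factor of a step is its odometer displacement divided by its IMU step length, and the proportionality coefficient is s_max / s_i, matching the ratio stated in the claims); the function name and array layout are illustrative assumptions.

```python
import numpy as np

def proportionality_coefficients(step_endpoints, step_lengths):
    """step_endpoints: (n+1)x2 odometer positions at the step boundaries.
    step_lengths: length-n array of IMU step lengths d_i.
    Returns (s, N): per-step scale factors and proportionality coefficients.
    """
    disp = np.diff(step_endpoints, axis=0)           # (x_i - x_{i-1}, y_i - y_{i-1})
    s = np.linalg.norm(disp, axis=1) / step_lengths  # scale factor s_i of each step
    N = s.max() / s                                  # N_i = s_max / s_i
    return s, N
```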
Secondly, starting from step 1, the visual odometer output of each step is remapped, i.e. reprojected, once to achieve scale normalization.
For the i-th step, the corresponding proportionality coefficient is N_i, and the visual odometer outputs during this step are \{p'_1, p'_2, p'_3, \ldots, p'_k\}; starting from p'_1, the points p'_2, p'_3, \ldots, p'_k are reprojected in turn.

\theta'_i = \arctan\left(\frac{y'_i - y'_{i-1}}{x'_i - x'_{i-1}}\right)

d'_i = \sqrt{(x'_i - x'_{i-1})^2 + (y'_i - y'_{i-1})^2}

where (x'_{i-1}, y'_{i-1}) and (x'_i, y'_i) are the odometer outputs of two adjacent frames within the step, \theta'_i is the direction information of the two adjacent frames, and d'_i is the distance information of the two adjacent frames.
The reprojection of the odometer output can be carried out according to the distance between two adjacent frames of data, the proportionality coefficient and the direction angle.
d''_i = N_i \cdot d'_i

x''_i = x''_{i-1} + d''_i \cdot \cos\theta'_i

y''_i = y''_{i-1} + d''_i \cdot \sin\theta'_i

where d''_i is the reprojected distance, and (x''_{i-1}, y''_{i-1}) and (x''_i, y''_i) are the reprojected odometer outputs of two adjacent frames.
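The following sketch applies the reprojection formulas above to the odometer outputs of a single step; the function name and the start argument (the reprojected coordinate at which the previous step ended) are illustrative assumptions.

```python
import numpy as np

def reproject_step(points, N_i, start):
    """Reproject the odometer outputs p'_1 ... p'_k of one step.

    points: kx2 array of visual odometer outputs within the step.
    N_i: proportionality coefficient of the step.
    start: reprojected coordinate the previous step ended at.
    Returns a kx2 array of reprojected points p''_1 ... p''_k.
    """
    out = [np.asarray(start, dtype=float)]
    for prev, cur in zip(points[:-1], points[1:]):
        dx, dy = cur[0] - prev[0], cur[1] - prev[1]
        theta = np.arctan2(dy, dx)                 # direction between adjacent frames
        d = N_i * np.hypot(dx, dy)                 # reprojected distance d''_i
        out.append(out[-1] + np.array([d * np.cos(theta), d * np.sin(theta)]))
    return np.vstack(out)
```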
And finally, drawing a visual map according to the re-projection result and acquiring scale information.
The maximum and minimum values of the reprojected odometer output P = \{p''_1, p''_2, \ldots, p''_n\} are found:

x''_{min} = \min\{x''_1, x''_2, \ldots, x''_n\}

x''_{max} = \max\{x''_1, x''_2, \ldots, x''_n\}

y''_{min} = \min\{y''_1, y''_2, \ldots, y''_n\}

y''_{max} = \max\{y''_1, y''_2, \ldots, y''_n\}

where (x''_i, y''_i) are the coordinates of the reprojected point p''_i, and x''_{min}, x''_{max}, y''_{min}, y''_{max} are the minimum and maximum X-axis and Y-axis values over all points of the reprojected odometer output P. The map scale information is then calculated according to the size of the image.
x''_{dif} = x''_{max} - x''_{min}

y''_{dif} = y''_{max} - y''_{min}

S = \frac{W}{\max(x''_{dif}, y''_{dif})}

where x''_{dif} and y''_{dif} are the maximum distances of the reprojected coordinates along the X axis and the Y axis, respectively; S is the map scale; and W is the size of the visual map that needs to be created.
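A short sketch of this final stage under the reconstruction above; whether W denotes a pixel width and how a non-square extent is handled are assumptions not fixed by the text.

```python
import numpy as np

def map_scale(reprojected, W):
    """reprojected: nx2 array of all reprojected odometer points p''_i.
    W: target size of the visual map to be created (e.g. in pixels).
    Returns (S, x_dif, y_dif): map scale and the coordinate extents.
    """
    x_min, y_min = reprojected.min(axis=0)
    x_max, y_max = reprojected.max(axis=0)
    x_dif, y_dif = x_max - x_min, y_max - y_min   # maximum X/Y distances
    S = W / max(x_dif, y_dif)                     # fit the larger extent to the map size
    return S, x_dif, y_dif
```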
In summary, the method provided by this embodiment recovers the scale by adding an IMU module that supplies metric size information. The resulting ranging method with scale information can be applied to robot navigation and control, and in a concrete implementation the method can be executed on the cloud, which improves its implementation efficiency.
Example 2
As shown in fig. 3, the present embodiment provides a cloud-based visual map scale detection apparatus, which includes: the system comprises a camera module, an IMU module, a visual odometer module and a processor;
the camera module is used for acquiring an image signal and sending the image signal to the visual odometer module;
the IMU module is used for acquiring an acceleration signal and sending the acceleration signal to the processor;
the visual odometer module is used for receiving the image signal sent by the camera module and acquiring the moving position information of each frame in each step according to the image signal;
the processor is configured with processor-executable operational instructions to perform operations comprising:
acquiring step length information of each step according to the acceleration signal;
and obtaining the scale information corresponding to the visual map to be established according to the mobile position information and the step length information.
Specifically, the camera module of the device described in this embodiment may be a monocular camera that acquires the image signal, and the visual odometer module obtains the position information of each frame in each step by analyzing that image signal. The IMU module estimates the step length from the acceleration generated as the object moves, supplying the scale information that is missing from visual mapping with an existing monocular camera alone. The processor is arranged in a cloud server: the cloud receives the acceleration signal and the moving position information sent by the IMU module and the visual odometer module respectively, obtains the step length information of each step from the acceleration signal, and finally obtains the scale information corresponding to the visual map to be established from the moving position information and the step length information.
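To make the interaction of the four modules concrete, here is a minimal, hypothetical wiring of the device; the class and method names are illustrative and are not defined by the patent.

```python
class ScaleDetectionDevice:
    """Hypothetical glue code mirroring the module layout described above."""

    def __init__(self, camera, imu, visual_odometer, cloud_processor):
        self.camera = camera          # produces image signals
        self.imu = imu                # produces acceleration signals
        self.vo = visual_odometer     # image signal -> per-frame moving position info
        self.cloud = cloud_processor  # cloud-side step-length and scale estimation

    def run_once(self):
        image = self.camera.read()
        accel = self.imu.read()
        positions = self.vo.track(image)
        # the cloud-side processor combines both signals into scale information
        return self.cloud.update(positions, accel)
```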
The device of this embodiment uses the visual odometer outputs (x_{i-1}, y_{i-1}) and (x_i, y_i) before and after each step, together with the step length d_i, to map the coordinate points, and normalizes the scale according to the scale factor and the odometer information corresponding to each step, thereby providing the scale information needed to draw an accurate map. The specific process is as follows: a proportionality coefficient is obtained from the moving position information and the step length information; the moving position information is reprojected according to the proportionality coefficient to generate reprojected position information; and the scale information corresponding to the visual map to be established is calculated from the reprojected position information. Specifically, the process comprises the following steps:
firstly, acquiring an actual scale factor corresponding to each step, and acquiring a proper proportional coefficient for coordinate normalization processing.
For step i, the step length is d_i and the odometer outputs before and after the step are (x_{i-1}, y_{i-1}) and (x_i, y_i). The actual scale factor s_i corresponding to that step can then be obtained:

s_i = \frac{\sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}}{d_i}
The scale factor set S = \{s_1, s_2, s_3, \ldots, s_{n-1}, s_n\} can then be obtained over the course of map building. The maximum of the set S is found and denoted s_{max}:

s_{max} = \max\{s_1, s_2, s_3, \ldots, s_{n-1}, s_n\}
For step i, the actual scale factor s_i is compared with the maximum value s_{max}, giving the proportionality coefficient N_i:

N_i = \frac{s_{max}}{s_i}
Secondly, starting from step 1, the visual odometer output of each step is remapped, i.e. reprojected, once to achieve scale normalization.
For the i-th step, the corresponding proportionality coefficient is N_i, and the visual odometer outputs during this step are \{p'_1, p'_2, p'_3, \ldots, p'_k\}; starting from p'_1, the points p'_2, p'_3, \ldots, p'_k are reprojected in turn.

\theta'_i = \arctan\left(\frac{y'_i - y'_{i-1}}{x'_i - x'_{i-1}}\right)

d'_i = \sqrt{(x'_i - x'_{i-1})^2 + (y'_i - y'_{i-1})^2}

where (x'_{i-1}, y'_{i-1}) and (x'_i, y'_i) are the odometer outputs of two adjacent frames within the step, \theta'_i is the direction information of the two adjacent frames, and d'_i is the distance information of the two adjacent frames.
The reprojection of the odometer output can be carried out according to the distance between two adjacent frames of data, the proportionality coefficient and the direction angle.
d''_i = N_i \cdot d'_i

x''_i = x''_{i-1} + d''_i \cdot \cos\theta'_i

y''_i = y''_{i-1} + d''_i \cdot \sin\theta'_i

where d''_i is the reprojected distance, and (x''_{i-1}, y''_{i-1}) and (x''_i, y''_i) are the reprojected odometer outputs of two adjacent frames.
And finally, drawing a visual map according to the re-projection result and acquiring scale information.
The maximum and minimum values of the reprojected odometer output P = \{p''_1, p''_2, \ldots, p''_n\} are found:

x''_{min} = \min\{x''_1, x''_2, \ldots, x''_n\}

x''_{max} = \max\{x''_1, x''_2, \ldots, x''_n\}

y''_{min} = \min\{y''_1, y''_2, \ldots, y''_n\}

y''_{max} = \max\{y''_1, y''_2, \ldots, y''_n\}

where (x''_i, y''_i) are the coordinates of the reprojected point p''_i, and x''_{min}, x''_{max}, y''_{min}, y''_{max} are the minimum and maximum X-axis and Y-axis values over all points of the reprojected odometer output P. The map scale information is then calculated according to the size of the image.
x''_{dif} = x''_{max} - x''_{min}

y''_{dif} = y''_{max} - y''_{min}

S = \frac{W}{\max(x''_{dif}, y''_{dif})}

where x''_{dif} and y''_{dif} are the maximum distances of the reprojected coordinates along the X axis and the Y axis, respectively; S is the map scale; and W is the size of the visual map to be created.
In summary, the device of this embodiment can recover the scale by adding an IMU module that supplies metric size information, and the resulting ranging method with scale information can be applied to robot navigation and control. In addition, the processor described in this embodiment can be deployed in the cloud, so that a cloud server processes the image signal and the acceleration signal and generates the scale information.
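As a purely hypothetical illustration of the cloud deployment, the processor could be exposed as a small HTTP service that receives odometer positions and IMU samples and returns the computed scale; the endpoint name, payload fields, and the placeholder pipeline function are assumptions, not part of the patent.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_scale_pipeline(positions, accelerations):
    # placeholder for the step-length, reprojection and map-scale stages of Embodiment 1
    return {"scale": None, "status": "pipeline not implemented in this sketch"}

@app.route("/scale", methods=["POST"])  # hypothetical endpoint
def detect_scale():
    """Receive per-frame odometer positions and IMU samples, return scale information."""
    payload = request.get_json()
    result = run_scale_pipeline(payload["positions"], payload["accelerations"])
    return jsonify(result)

if __name__ == "__main__":
    app.run()  # the processor runs as a cloud service
```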
Example 3
As shown in fig. 4, the present embodiment proposes an electronic device, where the electronic device includes: a display, a memory, one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing steps in the cloud-based visual map scale detection method of embodiment 1.
Example 4
The present embodiment proposes a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the steps of the cloud-based visual map scale detection method according to embodiment 1.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A cloud-based visual map scale detection method is characterized by comprising the following steps:
acquiring an image signal, and acquiring moving position information of each frame in each step according to the image signal;
acquiring an acceleration signal, and acquiring step length information of each step according to the acceleration signal;
obtaining the scale information corresponding to the visual map to be established according to the moving position information and the step length information;
the specific process of obtaining the scale information corresponding to the visual map to be established according to the moving position information and the step length information comprises the following steps:
obtaining a proportionality coefficient according to the moving position information and the step length information;
carrying out re-projection on the moving position information according to the proportionality coefficient to generate re-projection position information;
and calculating to obtain the scale information corresponding to the visual map to be established according to the reprojection position information.
2. The method according to claim 1, wherein the specific process of obtaining the scaling factor according to the moving position information and the step information is:
obtaining an actual scale factor corresponding to each step according to the mobile position information and the step length information;
collecting the actual scale factors corresponding to each step to generate an actual scale factor set;
screening out the largest actual scale factor from the set of actual scale factors;
and obtaining the proportional coefficient corresponding to each step according to the ratio of the maximum actual scale factor to the actual scale factor corresponding to each step.
3. The method according to claim 2, wherein the specific process of generating the reprojected location information by reprojecting the mobile location information according to the scaling factor is:
obtaining direction information and distance information of two adjacent points according to the mobile position information;
and carrying out re-projection calculation according to the direction information, the distance information and the proportionality coefficient to obtain re-projection position information corresponding to each step.
4. The method according to claim 3, wherein the specific process of calculating and obtaining the scale information corresponding to the visual map to be established according to the reprojection position information comprises:
respectively collecting the abscissa and the ordinate in the re-projection position information corresponding to each step to generate an abscissa set and an ordinate set;
screening out the maximum value and the minimum value of the abscissa from the abscissa set;
obtaining a maximum distance value of the abscissa by taking a difference value between the maximum value of the abscissa and the minimum value of the abscissa;
screening out the maximum value and the minimum value of the ordinate in the ordinate set;
obtaining a maximum distance value of the ordinate by taking a difference value between the maximum value of the ordinate and the minimum value of the ordinate;
and obtaining the scale information corresponding to the visual map to be established according to the maximum distance value of the abscissa and the maximum distance value of the ordinate.
5. A cloud-based visual map scale detection apparatus, the apparatus comprising: the system comprises a camera module, an IMU module, a visual odometer module and a processor;
the camera module is used for acquiring an image signal and sending the image signal to the visual odometer module;
the IMU module is used for acquiring an acceleration signal and sending the acceleration signal to the processor;
the visual odometer module is used for receiving the image signal sent by the camera module and acquiring the moving position information of each frame in each step according to the image signal;
the processor is configured with processor-executable operational instructions to perform operations comprising:
acquiring step length information of each step according to the acceleration signal;
obtaining the scale information corresponding to the visual map to be established according to the moving position information and the step length information;
the processor is configured with processor-executable operational instructions to perform operations comprising:
obtaining a proportionality coefficient according to the moving position information and the step length information;
carrying out re-projection on the moving position information according to the proportionality coefficient to generate re-projection position information;
and calculating to obtain the scale information corresponding to the visual map to be established according to the reprojection position information.
6. The apparatus of claim 5, wherein the processor is configured with processor-executable operating instructions to:
obtaining an actual scale factor corresponding to each step according to the mobile position information and the step length information;
collecting the actual scale factors corresponding to each step to generate an actual scale factor set;
screening out the largest actual scale factor from the set of actual scale factors;
and obtaining the proportional coefficient corresponding to each step according to the ratio of the maximum actual scale factor to the actual scale factor corresponding to each step.
7. The apparatus of claim 6, wherein the processor is configured with processor-executable operating instructions to:
obtaining direction information and distance information of two adjacent points according to the mobile position information;
and carrying out re-projection calculation according to the direction information, the distance information and the proportionality coefficient to obtain re-projection position information corresponding to each step.
8. The apparatus of claim 7, wherein the processor is configured with processor-executable operating instructions to:
respectively collecting the abscissa and the ordinate in the re-projection position information corresponding to each step to generate an abscissa set and an ordinate set;
screening out the maximum value and the minimum value of the abscissa from the abscissa set;
obtaining a maximum distance value of the abscissa by taking a difference value between the maximum value of the abscissa and the minimum value of the abscissa;
screening out the maximum value and the minimum value of the ordinate in the ordinate set;
obtaining a maximum distance value of the ordinate by taking a difference value between the maximum value of the ordinate and the minimum value of the ordinate;
and obtaining the scale information corresponding to the visual map to be established according to the maximum distance value of the abscissa and the maximum distance value of the ordinate.
9. An electronic device, characterized in that the electronic device comprises: a display, a memory, one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules comprising instructions for performing the steps of the method of any of claims 1-4.
10. A computer program product, characterized in that the computer program product comprises a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to carry out the steps of the method according to any one of claims 1 to 4.
CN201711421833.5A 2017-12-25 2017-12-25 Cloud-based visual map scale detection method and device Active CN108534757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711421833.5A CN108534757B (en) 2017-12-25 2017-12-25 Cloud-based visual map scale detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711421833.5A CN108534757B (en) 2017-12-25 2017-12-25 Cloud-based visual map scale detection method and device

Publications (2)

Publication Number Publication Date
CN108534757A CN108534757A (en) 2018-09-14
CN108534757B true CN108534757B (en) 2021-01-15

Family

ID=63489622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711421833.5A Active CN108534757B (en) 2017-12-25 2017-12-25 Cloud-based visual map scale detection method and device

Country Status (1)

Country Link
CN (1) CN108534757B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1995919A (en) * 2006-12-22 2007-07-11 凯立德欣技术(深圳)有限公司 Automatic control method, device, and equipment for navigation image scale
CN102419180A (en) * 2011-09-02 2012-04-18 无锡智感星际科技有限公司 Indoor positioning method based on inertial navigation system and WIFI (wireless fidelity)
CN103674028A (en) * 2013-12-27 2014-03-26 上海大唐移动通信设备有限公司 Positioning test method and positioning test device of indoor advancing track
CN105976402A (en) * 2016-05-26 2016-09-28 同济大学 Real scale obtaining method of monocular vision odometer
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
CN107516326A (en) * 2017-07-14 2017-12-26 中国科学院计算技术研究所 Merge monocular vision and the robot localization method and system of encoder information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090082711A (en) * 2008-01-28 2009-07-31 삼성전자주식회사 Method and system of step length estimation in the pedestrian navigation System

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1995919A (en) * 2006-12-22 2007-07-11 凯立德欣技术(深圳)有限公司 Automatic control method, device, and equipment for navigation image scale
CN102419180A (en) * 2011-09-02 2012-04-18 无锡智感星际科技有限公司 Indoor positioning method based on inertial navigation system and WIFI (wireless fidelity)
CN103674028A (en) * 2013-12-27 2014-03-26 上海大唐移动通信设备有限公司 Positioning test method and positioning test device of indoor advancing track
CN105976402A (en) * 2016-05-26 2016-09-28 同济大学 Real scale obtaining method of monocular vision odometer
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
CN107516326A (en) * 2017-07-14 2017-12-26 中国科学院计算技术研究所 Merge monocular vision and the robot localization method and system of encoder information

Also Published As

Publication number Publication date
CN108534757A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
CN109242903B (en) Three-dimensional data generation method, device, equipment and storage medium
CN111783820B (en) Image labeling method and device
CN106803271B (en) Camera calibration method and device for visual navigation unmanned aerial vehicle
WO2018119889A1 (en) Three-dimensional scene positioning method and device
US9530235B2 (en) Aligning panoramic imagery and aerial imagery
CN111586360A (en) Unmanned aerial vehicle projection method, device, equipment and storage medium
CN106033621B (en) A kind of method and device of three-dimensional modeling
CN110879400A (en) Method, equipment and storage medium for fusion positioning of laser radar and IMU
CN111737518B (en) Image display method and device based on three-dimensional scene model and electronic equipment
CN109523597A (en) The scaling method and device of Camera extrinsic
CN106570907B (en) Camera calibration method and device
CN113034347B (en) Oblique photography image processing method, device, processing equipment and storage medium
CN111754579A (en) Method and device for determining external parameters of multi-view camera
CN113137968B (en) Repositioning method and repositioning device based on multi-sensor fusion and electronic equipment
CN106204605A (en) A kind of localization method and device
CN113029128A (en) Visual navigation method and related device, mobile terminal and storage medium
CN111127584A (en) Method and device for establishing visual map, electronic equipment and storage medium
WO2023005457A1 (en) Pose calculation method and apparatus, electronic device, and readable storage medium
CN107442973B (en) Welding bead positioning method and device based on machine vision
JP2007299312A (en) Object three-dimensional position estimating device
JP7425169B2 (en) Image processing method, device, electronic device, storage medium and computer program
CN108534757B (en) Cloud-based visual map scale detection method and device
CN110827337B (en) Method and device for determining posture of vehicle-mounted camera and electronic equipment
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN115578432A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant