WO2023070441A1 - Positioning method and apparatus for a movable platform - Google Patents


Info

Publication number
WO2023070441A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
movable platform
map
feature
image
Prior art date
Application number
PCT/CN2021/127050
Other languages
English (en)
Chinese (zh)
Inventor
赵峰 (Zhao Feng)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to PCT/CN2021/127050
Priority to CN202180101909.4A (published as CN117940739A)
Publication of WO2023070441A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 Interpretation of pictures
    • G01C 11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C 11/12 Interpretation of pictures by comparison of two or more pictures of the same area, the pictures being supported in the same relative position as when they were taken
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C 21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G01C 21/32 Structuring or formatting of map data

Definitions

  • the present application relates to the field of positioning, and in particular to a positioning method and device for a movable platform.
  • Without the assistance of external positioning devices (such as GPS/RTK), a mobile platform must rely on its own sensors (such as a visual odometer) for positioning, but over time its positioning accuracy gradually declines. The movable platform then needs to be repositioned to reduce the positioning error.
  • Offline maps cannot reflect changes in the current environment in real time; in particular, their availability over long time spans (for example, across spring, summer, autumn and winter) decreases rapidly, which gradually reduces the success rate of relocation. Limited by computing power and real-time requirements, online maps can generally contain only sparse three-dimensional points whose positions are usually not accurate enough, so positioning against such a map tends to enlarge the relocation error.
  • the present application provides a positioning method and device for a movable platform.
  • the embodiment of the present application provides a positioning method for a mobile platform, including:
  • the historical motion operation is an operation before the first motion operation
  • the movable platform is positioned according to the image captured by the photographing device at the current moment, the online map and the offline map, and a first positioning result of the movable platform is obtained.
  • an embodiment of the present application provides a positioning device for a movable platform, the device comprising:
  • a storage device for storing program instructions
  • one or more processors that call the program instructions stored in the storage device; when the program instructions are executed, the one or more processors are individually or jointly configured to implement the method described in the first aspect.
  • the embodiment of the present application provides a mobile platform, including:
  • the positioning device is arranged on the fuselage.
  • an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method described in the first aspect is implemented.
  • the application does not rely solely on the online map or the offline map when performing visual relocation, but positions the mobile platform by combining the online map and the offline map.
  • The two maps complement each other: they cover the operating scene as fully as possible, improving the positioning success rate of the mobile platform, and integrating the effective information of the online map and the offline map improves the positioning accuracy of the mobile platform.
  • FIG. 1 is a schematic flowchart of a positioning method for a mobile platform in an embodiment of the present application
  • Fig. 2 is a schematic diagram of an implementation process for locating a movable platform and obtaining a first positioning result of the movable platform according to an image captured by a photographing device at a current moment, an online map, and an offline map in an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of a positioning method for a mobile platform in another embodiment of the present application.
  • Fig. 4 is a schematic diagram of a movable platform performing trajectory repetition in the first target area A and the second target area B in an embodiment of the present application;
  • Fig. 5 is a schematic diagram, in an embodiment of the present application, of the implementation process of positioning the movable platform and obtaining its first positioning result according to the second position information of the second feature points in the online maps that match the first feature points in the images at each current moment and the third position information of the third feature points in the offline maps that match those first feature points;
  • Fig. 6 is a schematic diagram of a movable platform moving in a target area in an embodiment of the present application
  • Fig. 7 is a schematic structural diagram of a positioning device for a movable platform in an embodiment of the present application.
  • "At least one" means one or more, and "multiple" means two or more.
  • “And/or” describes the association relationship of associated objects, indicating that there can be three types of relationships, for example, A and/or B, which can mean: A exists alone, A and B exist at the same time, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an “or” relationship.
  • "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single or plural items.
  • At least one (unit) of a, b, or c can represent: a, b, c, a and b, a and c, b and c, or a and b and c, wherein a, b, c can be single or multiple.
  • the mobile platform in this embodiment of the present application may include unmanned aerial vehicles (such as drones), robots, agricultural machinery, and the like.
  • the positioning method of the embodiment of the present application can be applied to fields such as autonomous return and trajectory replay of movable platforms, such as autonomous return of drones, automatic inspection of robot warehouses, and automatic agricultural operations.
  • Fig. 1 is a schematic flowchart of a positioning method for a mobile platform in an embodiment of the present application; the execution body of the positioning method may be the mobile platform, an independent controller provided on the mobile platform, or a combination of the two.
  • a method for positioning a mobile platform provided in an embodiment of the present application may include steps S11 to S14.
  • The movable platform may include a single photographing device or multiple photographing devices; when the movable platform includes multiple photographing devices, they face different directions of the movable platform and are used to obtain images of the environment in those different directions.
  • an online map of the target area is established according to the images captured by the photographing device at different times.
  • When the movable platform includes a single photographing device, the online map of the target area is established from the images captured by that device at different times; when it includes multiple photographing devices, an online map is established for each device from the images it captures at different times, so that multiple photographing devices correspond to multiple online maps.
  • Feature extraction can be performed on the images captured by the photographing device at different times to obtain the first position information of the first feature points in each image and the feature expression of each first feature point; then feature points observing the same physical point are associated across images and, combined with the pose of the photographing device when each image was captured, the second position information of the second feature points is obtained; together with the feature expressions, this yields the online map.
  • the characteristic expression may include color and/or texture; of course, the content of the characteristic expression may also include other environmental characteristic expressions.
  • The feature expression can be represented by, for example, ORB, SIFT or CNN features, or by other image features.
  • Feature expression can be in vector form or in other ways.
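As an illustration of associating feature points via vector-form feature expressions, a minimal sketch using mutual nearest-neighbour descriptor distance (the toy descriptors and the mutual-match rule are assumptions for illustration, not the application's specific method):

```python
import numpy as np

def associate_features(desc_a, desc_b):
    """Associate feature points of two images by mutual nearest-neighbour
    distance between their feature expressions (descriptor vectors)."""
    # Pairwise Euclidean distances between every descriptor in A and B.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)  # best match in B for each point of A
    nn_ba = d.argmin(axis=0)  # best match in A for each point of B
    # Keep only mutual matches, which are more reliable associations.
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Toy descriptors: points 0 and 1 of image A reappear (slightly perturbed)
# as points 1 and 0 of image B.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.05, 1.0], [1.0, 0.05]])
print(associate_features(a, b))  # → [(0, 1), (1, 0)]
```

The associated points, combined with camera poses, would then be triangulated into the second position information of the second feature points.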
  • the first location information is two-dimensional location information
  • the second location information is three-dimensional location information.
  • the offline map of the target area established when the movable platform executes the historical motion operation in the target area is acquired, and the historical motion operation is the operation before the first motion operation.
  • the historical motion operation is the previous operation of the first motion operation.
  • Generally, the time interval between the historical motion operation and the first motion operation should not be too long, to prevent the geographical environment of the target area from changing so much that the offline map no longer accurately reflects the characteristics of the target area when the first motion operation is performed.
  • the way to build the offline map is similar to the way to build the online map.
  • The offline map is built when the movable platform performs the historical motion operation in the target area; that is, an offline map has already been created during an earlier motion operation.
  • the offline map can be stored in the storage device of the mobile platform, or in an external storage device (such as the storage device of the remote control terminal of the mobile platform).
  • the offline map is acquired from the storage device of the mobile platform or an external storage device.
  • the movable platform is positioned according to the image captured by the photographing device at the current moment, the online map and the offline map, and a first positioning result of the movable platform is obtained.
  • the first positioning result is used to indicate the pose of the movable platform.
  • Fig. 2 is a schematic diagram of an implementation process for positioning a movable platform and obtaining a first positioning result of the movable platform according to an image captured by a photographing device at the current moment, an online map, and an offline map in an embodiment of the present application;
  • an implementation process of locating a movable platform and obtaining a first positioning result of the movable platform according to an image captured by a photographing device at the current moment, an online map, and an offline map may include S21-S24.
  • the first position information is two-dimensional position information
  • the second position information is three-dimensional position information
  • the third position information is also three-dimensional position information.
  • the first location information, the second location information, and the third location information are all coordinates of the real world.
  • Matching of the first feature point with the second feature point means that the first feature point is associated with the second feature point, and the first position information of the first feature point and the second position information of the second feature point indicate the same location in the target area.
  • Matching of the first feature point with the third feature point means that the first feature point is associated with the third feature point, and the first position information of the first feature point and the third position information of the third feature point indicate the same location in the target area.
  • S22 and S23 may be executed concurrently or sequentially in either order: S22 first and then S23, or S23 first and then S22.
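Since the online-map matching and offline-map matching steps are independent, they can run concurrently. A sketch with `concurrent.futures` (the two matching functions here are hypothetical placeholders, not the application's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for S22 and S23: matching the current image against the
# online and offline maps. The bodies are illustrative placeholders.
def match_online_map(image):
    return {"map": "online", "matches": 42}

def match_offline_map(image):
    return {"map": "offline", "matches": 17}

image = "frame_t"
# S22 and S23 are independent, so they may run concurrently...
with ThreadPoolExecutor(max_workers=2) as pool:
    f_on = pool.submit(match_online_map, image)
    f_off = pool.submit(match_offline_map, image)
    online_matches, offline_matches = f_on.result(), f_off.result()
# ...or sequentially, in either order, with the same result.
print(online_matches["map"], offline_matches["map"])  # → online offline
```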
  • the movable platform is positioned, and the first positioning result of the movable platform is obtained.
  • The offline map and the online map complement each other, covering the operating scene as fully as possible and improving the positioning success rate of the mobile platform, while fusing their effective information improves the precision of relocation.
  • In some embodiments, the second positioning result of the movable platform is determined according to the second position information of the second feature points in the online map that match the first feature points; the third positioning result of the movable platform is determined according to the third position information of the third feature points in the offline map that match the first feature points; and the second and third positioning results are fused to determine the first positioning result of the movable platform.
  • both the second positioning result and the third positioning result are used to indicate the pose of the movable platform.
  • The RANSAC PnP algorithm can be used to process the second position information of the second feature points in the online map that match the first feature points to obtain the second positioning result of the movable platform; of course, other algorithms can also be used for this purpose.
  • Likewise, the RANSAC PnP algorithm, or another algorithm, can be used to process the third position information of the third feature points in the offline map that match the first feature points to obtain the third positioning result of the movable platform.
  • The first positioning result can then be obtained as the weighted sum p = w1·p1 + w2·p2, where p is the first positioning result, p1 and p2 are the second and third positioning results respectively, and w1 and w2 are the first and second weights respectively.
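A minimal sketch of this weighted fusion for the position component (rotation fusion would need something like quaternion interpolation, which is omitted; the weight values are illustrative):

```python
import numpy as np

def fuse_positions(p1, p2, w1, w2):
    """First positioning result as the weighted sum p = w1*p1 + w2*p2,
    with the weights normalised so they sum to one."""
    s = w1 + w2
    w1, w2 = w1 / s, w2 / s
    return w1 * np.asarray(p1) + w2 * np.asarray(p2)

p1 = np.array([10.0, 0.0, 2.0])  # second positioning result (online map)
p2 = np.array([14.0, 4.0, 2.0])  # third positioning result (offline map)
print(fuse_positions(p1, p2, w1=3.0, w2=1.0))  # → [11.  1.  2.]
```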
  • Determining the first weight and the second weight can be done in various ways. For example, in some embodiments, determining the first weight (of the online map) and the second weight (of the offline map) may include the following steps:
  • steps (1) and (2) can be executed synchronously or sequentially, for example, (1) is executed first, and then (2), or (2) is executed first, and then (1) is executed.
  • the first weight is positively correlated with the first quantity, that is, the larger the first quantity, the greater the first weight; the smaller the first quantity, the smaller the first weight.
  • the second weight is positively correlated with the second quantity, that is, the larger the second quantity, the greater the second weight; the smaller the second quantity, the smaller the second weight.
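A sketch of weights positively correlated with the numbers of matched feature points; normalising by the total is one simple choice consistent with the description, not the application's stated formula:

```python
def weights_from_match_counts(n_online, n_offline):
    """First and second weights positively correlated with the number of
    matched feature points in the online and offline maps respectively."""
    total = n_online + n_offline
    return n_online / total, n_offline / total

# 30 online-map matches vs 10 offline-map matches.
w1, w2 = weights_from_match_counts(30, 10)
print(w1, w2)  # → 0.75 0.25
```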
  • In other embodiments, determining the first weight (of the online map) and the second weight (of the offline map) may include the following steps:
  • the first pixel distance and the second pixel distance can be used to characterize the reprojection error.
  • The first weight is negatively correlated with the first pixel distance: the larger the first pixel distance, the larger the reprojection error and the smaller the first weight; the smaller the first pixel distance, the smaller the reprojection error and the greater the first weight.
  • Likewise, the second weight is negatively correlated with the second pixel distance: the larger the second pixel distance, the larger the reprojection error and the smaller the second weight; the smaller the second pixel distance, the smaller the reprojection error and the greater the second weight.
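A sketch of weights negatively correlated with reprojected pixel distance; using normalised inverse distances is one plausible realisation, assumed here for illustration:

```python
def weights_from_reprojection(d1, d2, eps=1e-6):
    """Weights negatively correlated with each map's mean reprojected
    pixel distance: a larger reprojection error gives a smaller weight."""
    inv1, inv2 = 1.0 / (d1 + eps), 1.0 / (d2 + eps)
    s = inv1 + inv2
    return inv1 / s, inv2 / s

# Online-map matches reproject with 1 px error, offline with 3 px.
w1, w2 = weights_from_reprojection(d1=1.0, d2=3.0)
print(round(w1, 2), round(w2, 2))  # → 0.75 0.25
```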
  • In other embodiments, the fusion positioning of S24 may be implemented as follows: the second position information of the second feature points in the online map that match the first feature points and the third position information of the third feature points in the offline map that match the first feature points are combined into a first data group; the position information in the first data group is then processed with the RANSAC PnP algorithm to obtain the first positioning result of the movable platform.
  • This fusion method operates at the feature level, which improves both the positioning success rate and the positioning accuracy of the movable platform.
  • That is, the first data group is the collection of the second position information of the second feature points matching the first feature points in the online map and the third position information of the third feature points matching the first feature points in the offline map.
  • The RANSAC PnP algorithm can also be replaced by other relocation algorithms.
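A sketch of forming the first data group; the tuple layout and the toy correspondences are assumptions, and a RANSAC PnP solver (e.g. OpenCV's `solvePnPRansac`) would consume the merged 2D-3D pairs:

```python
def build_first_data_group(online_matches, offline_matches):
    """Form the first data group: 2D image positions of first feature
    points paired with the 3D positions of their matching second
    (online-map) and third (offline-map) feature points."""
    group = []
    for px, xyz in online_matches:
        group.append((px, xyz, "online"))
    for px, xyz in offline_matches:
        group.append((px, xyz, "offline"))
    # A single RANSAC PnP solve would then run over `group`, treating
    # online- and offline-map correspondences uniformly (feature-level
    # fusion, rather than fusing two separate pose estimates).
    return group

online = [((100, 200), (1.0, 2.0, 5.0))]
offline = [((105, 195), (1.1, 2.0, 5.1)), ((300, 50), (4.0, -1.0, 7.0))]
group = build_first_data_group(online, offline)
print(len(group))  # → 3
```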
  • the positioning method of the embodiment of the present application does not rely solely on online maps or offline maps when performing visual relocation.
  • the offline maps have high precision but are not real-time, and the online maps have low precision but are available in real time.
  • This application combines online and offline maps to position the mobile platform: the two complement each other, covering the operating scene as fully as possible and improving the positioning success rate, while integrating their effective information improves the positioning accuracy of the mobile platform.
  • FIG. 3 is a schematic flowchart of a positioning method for a movable platform in another embodiment of the present application; referring to FIG. 3 , the positioning method in an embodiment of the present application may further include S31-S34.
  • the feature expression corresponding to the third feature point in the offline map is updated according to the feature expression of the fifth feature point.
  • Updates in this embodiment of the application may include additions or deletions.
  • When the feature expression of the fifth feature point is inconsistent with the existing feature expressions, the feature expression of the fifth feature point is added to the feature expressions of the corresponding third feature point in the offline map.
  • the update of the offline map is not simply to replace the original feature expression, but to establish a multi-modal feature expression.
  • Offline maps generally cannot cope with long-term changes: for example, the appearance of an area differs completely across spring, summer, autumn and winter, and a tree in the target area may be green in summer and yellow in autumn.
  • When the feature expression is represented by a vector, the difference between two feature expressions can be measured; when the difference between the feature expression of the fifth feature point and the feature expression of the corresponding third feature point of the offline map is greater than a preset threshold, the two feature expressions are considered inconsistent.
  • the size of the preset threshold can be set as required.
  • Each feature expression of a third feature point in the offline map is then the feature expression of the corresponding mode of the fifth feature point that most recently matched it, which improves the long-term usability and robustness of the offline map.
  • the characteristic expression may include color and/or texture; of course, the content of the characteristic expression may also include other environmental characteristic expressions.
  • In some embodiments, the feature expression of the fifth feature point is matched against the feature expressions of the corresponding third feature point in the offline map; when a feature expression of a third feature point has gone unmatched by any corresponding fifth feature point for a duration greater than or equal to a preset duration, that feature expression is deleted. That is, when the feature expression of a certain modality has not been successfully matched for a long time, the environment is considered to have undergone an irreversible change (for example, a tree in the target area has been cut down), and the feature expression of that modality is deleted directly.
  • the size of the preset duration can be set as required.
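An illustrative sketch of this multi-modal update rule attached to a single third feature point; the thresholds, the integer step counter, and the class layout are assumptions, not the application's implementation:

```python
import numpy as np

class MultiModalFeature:
    """Multi-modal feature expressions of one third feature point in the
    offline map: a new mode is added when an observation differs from
    every stored mode, and modes unmatched for too long are deleted."""

    def __init__(self, add_threshold=0.5, max_unmatched=3):
        self.modes = []  # list of (descriptor, step_of_last_match)
        self.add_threshold = add_threshold
        self.max_unmatched = max_unmatched

    def update(self, desc, step):
        desc = np.asarray(desc, dtype=float)
        if self.modes:
            dists = [np.linalg.norm(desc - m) for m, _ in self.modes]
            best = int(np.argmin(dists))
        if not self.modes or dists[best] > self.add_threshold:
            # Inconsistent with every stored mode: add a new modality.
            self.modes.append((desc, step))
        else:
            # Consistent: refresh the matched mode's last-matched step.
            self.modes[best] = (self.modes[best][0], step)
        # Delete modes unmatched for too long (irreversible change,
        # e.g. a tree in the target area was cut down).
        self.modes = [(m, t) for m, t in self.modes
                      if step - t < self.max_unmatched]

f = MultiModalFeature()
f.update([0.0, 1.0], step=0)  # "summer" appearance stored
f.update([1.0, 0.0], step=1)  # "autumn" appearance: new mode added
print(len(f.modes))           # → 2
f.update([1.0, 0.0], step=4)  # summer mode unmatched for 4 steps: pruned
print(len(f.modes))           # → 1
```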
  • The above positioning method can be tested. For example, referring to Fig. 4, under the condition of no external positioning source (such as GPS/RTK), the environmental features in the first target area A are represented schematically by wavy lines; in the second target area B, the environmental features at the same positions as those of the first target area A are also represented by wavy lines, the environmental features at positions differing from those of the first target area A are represented by straight lines, and the dotted lines divide the different local areas.
  • the movable platform moved in the first target area A and built an offline map, and then went to the second target area B for the first trajectory replay.
  • During the first trajectory replay, the movable platform can repeat the trajectory accurately in the wavy-line portion of the second target area B, but gradually drifts in the straight-line portion.
  • The movable platform then performs a second trajectory replay in the second target area B and can replay accurately, which shows that the offline map was updated during the first trajectory replay.
  • After the second trajectory replay, the movable platform moves into the first target area A and can still perform trajectory replay, which means that the offline map stores the feature expressions of both modes of the first target area A and the second target area B.
  • the embodiment of the present application also provides a positioning method, which may include:
  • the first partial area (A1) of the first target area and the second partial area (B1) of the second target area are in the same location area and have the same environmental characteristics.
  • the third partial area (A2) of the first target area and the fourth partial area (B2) of the second target area are in the same location area and have different environmental characteristics.
  • the first partial area (A1) is adjacent to the third partial area (A2), and the second partial area (B1) is adjacent to the fourth partial area (B2).
  • the first motion track is the same as the third motion track, and the second motion track is different from the fourth motion track.
  • the third motion track is the same as the fifth motion track, and the fourth motion track is the same as the sixth motion track.
  • the first motion track is the same as the seventh motion track
  • the eighth motion track is the same as the second motion track.
  • For the online map created in real time during a motion operation, if its features do not match some of the feature expressions of the offline map of a previously visited location area, they can be stored as feature expressions of a different modality of the offline map.
  • the traditional positioning method is based on a single shooting device, and the positioning success rate and accuracy are low.
  • the success rate and accuracy of relocation can be significantly improved by relocating the feature points of the multi-channel shooting device respectively.
  • the movable platform in this embodiment includes a plurality of photographing devices facing different directions of the movable platform for acquiring images of environments in different directions of the movable platform.
  • The movable platform may include a front-view photographing device arranged at the front of the fuselage, a left-view photographing device arranged at the left side of the fuselage, and a right-view photographing device arranged at the right side of the fuselage.
  • the movable platform is positioned according to the image captured by the shooting device at the current moment, the online map and the offline map, and the implementation process of obtaining the first positioning result of the movable platform may include:
  • For each photographing device, there is a corresponding online map; the online map corresponding to each photographing device is established from the images captured at different times while that device performs the first motion operation in the target area.
  • S53 specifically: according to the first position information and the second position information, the image at each current moment is relocated against its online map, and the second feature points in the corresponding online map that match the first feature points in the image at each current moment are determined. In this step, the photographing device, the image at the current moment and the online map correspond one to one.
  • For each photographing device, there is a corresponding offline map; the offline map corresponding to each photographing device is established from the images captured at different times while that device performed the historical motion operation in the target area.
  • S53 specifically: according to the first position information and the third position information, the image at each current moment is relocated against its offline map, and the third feature points in the corresponding offline map that match the first feature points in the image at each current moment are determined. In this step, the photographing device, the image at the current moment and the offline map correspond one to one.
  • the third position information of the third feature point is used to locate the movable platform, and obtain the first positioning result of the movable platform.
  • Different multi-device fusion positioning methods can be used. For example, in some embodiments, the fourth positioning result of the movable platform corresponding to each photographing device is determined according to the second position information of the second feature points in that device's online map that match the first feature points in its image at the current moment, and the third position information of the third feature points in that device's offline map that match those first feature points; the fourth positioning results corresponding to the multiple photographing devices are then fused to obtain the first positioning result of the movable platform. That is, relocation is performed for each photographing device separately, and the relocation results of the multiple devices are fused into the first positioning result.
  • the fourth positioning result is used to indicate the pose of the movable platform.
  • The RANSAC PnP algorithm can be used to determine, for each photographing device, the fourth positioning result of the movable platform from the second position information of the second feature points in that device's online map matching the first feature points in the image it captured at the current moment and the third position information of the third feature points in that device's offline map matching those first feature points; of course, the RANSAC PnP algorithm can also be replaced by other relocation algorithms.
  • the fourth positioning results of the movable platform respectively corresponding to the plurality of photographing devices are weighted and averaged to obtain the first positioning result of the movable platform.
  • The third weights corresponding to the multiple photographing devices can be determined according to the relocation error of each device: for a given photographing device, the greater its relocation error, the smaller its third weight; the smaller its relocation error, the larger its third weight.
  • the relocation error can be determined according to the number of matching feature points and/or the reprojected pixel distance, as described in the corresponding part of the above embodiment, and will not be repeated here.
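The weighting rule above (larger relocation error, smaller weight) can be sketched as follows. This is an assumed illustration only: the patent does not fix the weight formula, and `fuse_positions`, the inverse-error weighting, and the averaging of position vectors (rotations would need quaternion averaging) are choices made here for simplicity:

```python
import numpy as np

def fuse_positions(positions, reloc_errors, eps=1e-6):
    """Weighted average of per-camera position estimates, where each camera's
    third weight is inversely related to its relocation error."""
    errors = np.asarray(reloc_errors, dtype=float)
    weights = 1.0 / (errors + eps)        # larger error -> smaller weight
    weights /= weights.sum()              # normalize so the weights sum to 1
    fused = np.average(np.asarray(positions, dtype=float), axis=0, weights=weights)
    return weights, fused

# Two cameras: the second has twice the relocation error of the first,
# so it receives roughly half the weight of the first.
w, fused = fuse_positions([[0.0, 0.0, 10.0], [0.3, 0.0, 10.0]], [1.0, 2.0])
```

Averaging only the translation keeps the example short; fusing full poses would additionally require averaging orientations, e.g. via weighted quaternion averaging.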
  • the process of implementing the multi-camera fusion positioning method of S55 may include: forming a second data set from the second position information of the second feature points in the online map that match the first feature points in the image captured by each photographing device at the current moment and the third position information of the third feature points in the offline map that match those first feature points; and processing the position information in the second data set with the RANSAC PnP algorithm or a bundle adjustment algorithm to obtain the first positioning result of the movable platform.
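The assembly of the second data set described above, pooling every camera's online-map and offline-map matches into one set before a single joint solve, can be sketched as below. The function name `build_second_data_set` and the per-match tuple layout are hypothetical conveniences, not structures defined in the patent:

```python
import numpy as np

def build_second_data_set(per_camera_matches):
    """Feature-level fusion: rather than solving one pose per camera and then
    fusing poses, pool the 2D-3D correspondences from every camera's
    online-map and offline-map matches into one data set, to be handed to a
    single joint RANSAC-PnP or bundle-adjustment solve."""
    pts_2d, pts_3d, sources = [], [], []
    for cam_id, matches in per_camera_matches.items():
        for uv, xyz, source in matches:       # (pixel_uv, map_xyz, map_source)
            pts_2d.append(uv)
            pts_3d.append(xyz)
            sources.append((cam_id, source))  # remember which camera/map each point came from
    return np.asarray(pts_2d, float), np.asarray(pts_3d, float), sources

# Example: a forward-looking and a downward-looking camera contribute
# matches from both the online and the offline map.
matches = {
    "front": [((100.0, 120.0), (1.0, 0.5, 6.0), "online"),
              ((410.0, 260.0), (2.0, -0.2, 7.5), "offline")],
    "down":  [((320.0, 240.0), (0.0, 0.0, 3.0), "online")],
}
p2d, p3d, src = build_second_data_set(matches)
```

Keeping the per-point source tags allows the joint solver to account for each camera's extrinsics when forming reprojection residuals.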
  • This fusion method uses feature-level fusion for multi-directional fusion relocation, rather than simply calculating a relocation result for each direction and then performing fusion.
  • the multi-directional fusion relocation method of the embodiment of the present application can therefore greatly improve the success rate and accuracy of movable platform positioning.
  • the movable platform is provided with a first sensor 10 and a second sensor 20 , and the first sensor 10 and the second sensor 20 face different orientations of the fuselage of the movable platform.
  • the embodiment of the present application also provides a positioning method, which may include:
  • the relocation of the movable platform in step (2) is very inaccurate.
  • control the movable platform to perform a third motion operation in the target area, control the first sensor 10 or the second sensor 20 to collect images of the target area, and obtain the trajectory of the third operation based on the images and offline-map positioning;
  • the relocation of the movable platform in step (3) is more accurate than the relocation of the movable platform in step (2).
  • the deviation between the trajectory of the second operation and the trajectory of the first operation is greater than the deviation between the trajectory of the third operation and the trajectory of the first operation, which shows that the movable platform uses the multi-photographing-device fusion relocation method of the above embodiment.
  • the UAV uses the downward-looking photographing device and the forward-looking photographing device for fusion positioning, without an external positioning source (such as GPS/RTK). In an environment with poor forward-looking and downward-looking conditions, if one photographing device is covered, a UAV relying on a single direction cannot be relocated through that photographing device and cannot complete accurate trajectory replay; with fusion, relocation can still be performed as long as at least one of the downward-looking photographing device and the forward-looking photographing device is not blocked, which shows that the UAV uses the multi-photographing-device fusion relocation method of the above embodiment.
  • an embodiment of the present application further provides a positioning device of the movable platform.
  • the positioning device of the movable platform may include a storage device and one or more processors.
  • the storage device is used for storing program instructions.
  • the storage device stores a computer program of executable instructions for the positioning method of the movable platform.
  • the storage device may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the positioning device of the movable platform may cooperate, through a network connection, with a network storage device that performs the storage function of the memory.
  • the memory may be an internal storage unit of the positioning device of the movable platform, such as a hard disk or a memory of the positioning device.
  • the memory may also be an external storage device of the positioning device of the movable platform, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the positioning device.
  • the memory may also include both an internal storage unit of the positioning device of the movable platform and an external storage device.
  • the memory is used to store the computer program and other programs and data required by the device.
  • the memory may also be used to temporarily store data that has been output or is to be output.
  • One or more processors call the program instructions stored in the storage device, and when the program instructions are executed, the one or more processors are individually or jointly configured to implement the positioning method in the above embodiments.
  • the processor of this embodiment can implement the positioning method of the embodiment shown in FIG. 1 or FIG. 2 or FIG. 3 or FIG. 5 of this application.
  • the positioning device of this embodiment can be described with reference to the positioning method of the above embodiment.
  • the processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • An embodiment of the present application provides a movable platform.
  • the movable platform may include a fuselage and the positioning device in the above embodiment, and the positioning device is arranged on the fuselage.
  • the mobile platform in this embodiment of the present application may include unmanned aerial vehicles (such as drones), robots, agricultural machinery, and the like.
  • an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method described in the first aspect is implemented.
  • the computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, such as a hard disk or a memory.
  • the computer-readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), an SD card, or a flash card (Flash Card) equipped on the device.
  • the computer-readable storage medium may also include both an internal storage unit of the movable platform and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the movable platform, and may also be used to temporarily store data that has been output or is to be output.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Method and apparatus for positioning a movable platform, the method comprising: when a movable platform performs a first movement operation in a target area, acquiring images captured by a photographing apparatus on the movable platform (S11); building an online map of the target area according to the images captured by the photographing apparatus at different moments (S12); acquiring an offline map of the target area built while the movable platform performed a historical movement operation in the target area, the historical movement operation being an operation prior to the first movement operation (S13); and positioning the movable platform according to the image captured by the photographing apparatus at the current moment, the online map and the offline map, so as to obtain a first positioning result of the movable platform (S14). With this method, when visual relocation is performed, the online map and the offline map complement each other so as to cover the operating scenario as much as possible, which improves the success rate and positioning accuracy of the movable platform.
PCT/CN2021/127050 2021-10-28 2021-10-28 Procédé et appareil de positionnement de plateforme mobile WO2023070441A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/127050 WO2023070441A1 (fr) 2021-10-28 2021-10-28 Procédé et appareil de positionnement de plateforme mobile
CN202180101909.4A CN117940739A (zh) 2021-10-28 2021-10-28 可移动平台的定位方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/127050 WO2023070441A1 (fr) 2021-10-28 2021-10-28 Procédé et appareil de positionnement de plateforme mobile

Publications (1)

Publication Number Publication Date
WO2023070441A1 true WO2023070441A1 (fr) 2023-05-04

Family

ID=86160356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/127050 WO2023070441A1 (fr) 2021-10-28 2021-10-28 Procédé et appareil de positionnement de plateforme mobile

Country Status (2)

Country Link
CN (1) CN117940739A (fr)
WO (1) WO2023070441A1 (fr)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170004345A1 (en) * 2015-07-02 2017-01-05 Topcon Corporation Survey data processing device, survey data processing method, and survey data processing program
WO2017020856A1 (fr) * 2015-08-05 2017-02-09 普宙飞行器科技(深圳)有限公司 Dispositif et procédé de photographie utilisant un drone pour suivre et photographier automatiquement un objet mobile
CN108021884A (zh) * 2017-12-04 2018-05-11 深圳市沃特沃德股份有限公司 基于视觉重定位的扫地机断电续扫方法、装置及扫地机
CN109084732A (zh) * 2018-06-29 2018-12-25 北京旷视科技有限公司 定位与导航方法、装置及处理设备
CN109544615A (zh) * 2018-11-23 2019-03-29 深圳市腾讯信息技术有限公司 基于图像的重定位方法、装置、终端及存储介质
CN109648568A (zh) * 2019-01-30 2019-04-19 北京镁伽机器人科技有限公司 机器人控制方法、系统及存储介质
CN109822568A (zh) * 2019-01-30 2019-05-31 北京镁伽机器人科技有限公司 机器人控制方法、系统及存储介质
CN109887087A (zh) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 一种车辆的slam建图方法及系统
CN110457414A (zh) * 2019-07-30 2019-11-15 Oppo广东移动通信有限公司 离线地图处理、虚拟对象显示方法、装置、介质和设备
CN111311756A (zh) * 2020-02-11 2020-06-19 Oppo广东移动通信有限公司 增强现实ar显示方法及相关装置
CN111699453A (zh) * 2019-07-01 2020-09-22 深圳市大疆创新科技有限公司 可移动平台的控制方法、装置、设备及存储介质
US20200342626A1 (en) * 2018-06-19 2020-10-29 Tencent Technology (Shenzhen) Company Limited Camera localization method and apparatus, terminal, and storage medium

Also Published As

Publication number Publication date
CN117940739A (zh) 2024-04-26

Similar Documents

Publication Publication Date Title
CN108986161B (zh) 一种三维空间坐标估计方法、装置、终端和存储介质
US20210141378A1 (en) Imaging method and device, and unmanned aerial vehicle
CN110568447B (zh) 视觉定位的方法、装置及计算机可读介质
WO2019119328A1 (fr) Procédé de positionnement basé sur la vision et véhicule aérien
US9981742B2 (en) Autonomous navigation method and system, and map modeling method and system
CN108955718B (zh) 一种视觉里程计及其定位方法、机器人以及存储介质
WO2020164092A1 (fr) Procédé et appareil de traitement d'image, plateforme mobile, engin volant sans pilote embarqué et support de stockage
CN110176032B (zh) 一种三维重建方法及装置
CN111192331B (zh) 一种激光雷达和相机的外参标定方法及装置
CN111337947A (zh) 即时建图与定位方法、装置、系统及存储介质
KR102347239B1 (ko) 라이다와 카메라를 이용하여 이미지 특징점의 깊이 정보를 향상시키는 방법 및 시스템
WO2021016854A1 (fr) Procédé et dispositif d'étalonnage, plateforme mobile et support de stockage
CN109213202B (zh) 基于光学伺服的货物摆放方法、装置、设备和存储介质
WO2021195939A1 (fr) Procédé d'étalonnage pour paramètres externes d'un dispositif de photographie binoculaire, plateforme mobile et système
WO2022179094A1 (fr) Procédé et système d'étalonnage conjoint de paramètre externe de lidar monté sur véhicule, support et dispositif
CN111612794A (zh) 基于多2d视觉的零部件高精度三维位姿估计方法及系统
WO2022077296A1 (fr) Procédé de reconstruction tridimensionnelle, charge de cardan, plate-forme amovible et support de stockage lisible par ordinateur
CN113052907B (zh) 一种动态环境移动机器人的定位方法
JP2019032218A (ja) 位置情報記録方法および装置
WO2022156447A1 (fr) Procédé et appareil de localisation, appareil informatique et support de stockage lisible par ordinateur
WO2020237478A1 (fr) Procédé de planification de vol et dispositif associé
US20210156710A1 (en) Map processing method, device, and computer-readable storage medium
WO2022088613A1 (fr) Procédé et appareil de positionnement de robot, dispositif et support de stockage
CN113034347A (zh) 倾斜摄影图像处理方法、装置、处理设备及存储介质
WO2023070441A1 (fr) Procédé et appareil de positionnement de plateforme mobile

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21961807

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180101909.4

Country of ref document: CN