CN114723780A - Position tracking platform based on-demand images - Google Patents


Info

Publication number
CN114723780A
Authority
CN
China
Prior art keywords
camera
image
drone
target asset
area
Prior art date
Legal status
Pending
Application number
CN202111665151.5A
Other languages
Chinese (zh)
Inventor
萨法维·赛德
Current Assignee
Xiqiao Venture Capital Co ltd
Original Assignee
Xiqiao Venture Capital Co ltd
Priority date
Filing date
Publication date
Priority claimed from US17/143,059 (published as US20210256712A1)
Application filed by Xiqiao Venture Capital Co ltd
Publication of CN114723780A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/261 Image signal generators with monoscopic-to-stereoscopic image conversion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

An image processing system is disclosed that includes a plurality of drones flying over a geographic area. In some embodiments, the geographic area is within the cell coverage area of a cellular transmission tower. In some embodiments, the cellular transmission tower is capable of communicating with a cellular telephone transceiver within the lead drone through a cellular telephone network. In some such embodiments, one or more drones have a camera capable of taking relatively high resolution pictures of the ground, and of features on the ground, below the drone. The ground area that the camera is capable of capturing may include an area directly below each of the other drones. The images may then be compared to other images. Using an image recognition algorithm, a processor may identify a target asset and track the target asset based on a comparison of the images.

Description

Position tracking platform based on-demand images
CROSS-REFERENCE TO RELATED APPLICATIONS - CLAIM OF RIGHTS OR PRIORITY
This application is a commonly assigned and co-pending continuation-in-part of U.S. application No. 16/355,443, entitled "On-Demand Outdoor Image Based Location Tracking Platform," filed on March 15, 2019, the disclosure of which is incorporated herein by reference in its entirety, and claims the benefit of priority to that prior U.S. application No. 16/355,443 under 35 U.S.C. § 120. Application No. 16/355,443 claims priority to U.S. provisional application No. 62/643,501, entitled "On-Demand Outdoor Image Based Location Tracking Platform," filed on March 15, 2018, which provisional application is incorporated herein by reference in its entirety.
Technical Field
Various embodiments described herein relate to systems and methods for performing position location, and more particularly, to systems and methods for accurate location and position tracking of objects and people in indoor and outdoor environments.
Background
In view of the rise of applications such as smart cities, connected cars, and the Internet of Things (IoT), location-based applications are becoming increasingly important, and the need for accurate positioning and location tracking is growing accordingly. Position location is used for everything from navigation, to tagging the location at which a picture was taken, to finding individuals. More and more companies are integrating location-based services into their platforms to increase the productivity and predictability of their services.
In most cases, applications that require knowledge of a device's location rely on a local receiver that accesses the Global Positioning System (GPS). Other competing global navigation satellite systems, such as GLONASS, also exist. One major drawback of such global navigation satellite systems (e.g., current GPS-based systems) is that they all require a relatively sensitive receiver located on the object being tracked. This is not always effective, practical, or feasible, especially in critical situations such as security threats or emergencies (e.g., natural disasters). Furthermore, there are cases in which it is difficult to receive the satellite transmissions required by current global navigation satellite systems. This may be because the satellite receiver is located indoors, or because obstructions to the satellite signal (e.g., tall buildings, foliage, etc.) make reception inherently difficult.
In addition, most approaches to tracking target assets (e.g., objects and people) require a transmitter collocated with the target asset that sends information obtained at the target asset to a processing system, which then evaluates the transmitted information. The need for a transmitter increases the power consumption, cost, and complexity of the device that must be present with the target asset.
Accordingly, there is a need for a system for locating and tracking a target asset without requiring a transmitter or receiver on the tracked target asset.
Disclosure of Invention
Drawings
FIG. 1 is a schematic diagram of one example of a system in accordance with the disclosed methods and apparatus;
FIG. 2 is a schematic view of an indoor camera mounted on an interior wall of a building;
FIG. 3 is a schematic diagram of a system in accordance with embodiments of the disclosed method and apparatus;
FIG. 4 shows an example of a position tracking step based on 2D rotation when the region of interest is directly below the camera field of view;
FIG. 5 shows an example of a position tracking step based on 3D rotation when the region of interest is not located directly below the camera and is viewed at an arbitrary tilt angle.
Like reference numbers and designations in the various drawings indicate like elements.
Detailed Description
The methods and apparatus of the present disclosure use various hardware devices and hardware platforms in conjunction with software algorithms to identify, locate and/or track target assets. In some embodiments, digital signal processing and image processing are used to perform the desired tasks. In some embodiments, the target asset includes an object such as, but not limited to, a vehicle, an electronic device, a car key, a person, and the like. Some embodiments of the disclosed methods and apparatus provide location-based services without requiring complex, expensive, or cumbersome equipment associated with the target asset. Such embodiments eliminate the need for tracking devices, transmitters, or receivers carried by, attached to, or otherwise present on or at the target asset's location.
The disclosed methods and apparatus may also be helpful in a variety of related applications, such as identifying situations and opportunities of particular interest. In some embodiments, these opportunities and situations include identifying the location of an empty parking space, finding a particular building from an image of the building or an image on or near the building without the system knowing the building address, identifying and finding lost or misplaced items in an enclosed environment, and the like. In some embodiments, a unique structure or identifying characteristic of a building (e.g., a logo with a company name or other physical name occupying the building) is used to find the building. In some embodiments, image processing-based techniques are used to accurately identify and/or locate target assets. Some such embodiments of the disclosed methods and apparatus use Artificial Intelligence (AI) to help locate or identify a target asset. In other embodiments, AI independent techniques are used.
FIG. 1 is a diagram of one embodiment of the disclosed method and apparatus. In accordance with the disclosed method and apparatus, the system 100 uses one or more cameras 103. In some embodiments, the cameras 103a are mounted on one or more drones 102, 104 flying over a geographic area 110. In some such embodiments, the lead drone 102 has a processor that allows the lead drone 102 to control and coordinate the operation of the secondary drones 104. In the example shown, the lead drone 102 is explicitly shown as having a camera 103a. The secondary drones 104 may also have cameras, which are omitted from the figure for simplicity.
It should be noted that reference numerals used in the drawings may include a numeric character followed by an alphabetic character, e.g., 103a, wherein the numeric character "103" is followed by the alphabetic character "a". Reference numerals having the same numeric characters refer to features of the figures that are similar in structure or function, or both. For example, the cameras 103a, 103b perform similar functions, but each camera 103 may be associated with a different mounting. Further, the numeric characters of a reference numeral may be used alone to refer to like features collectively. For example, in the present disclosure, "camera 103" refers to the drone-mounted camera 103a and any other cameras, such as the wall-mounted camera 103b shown in fig. 2.
In some embodiments, the geographic region 110 is within a cellular coverage area 111 of a cellular transmission tower 112. The cellular transmission tower 112 facilitates communication between the cellular telephone core network 106 and various communication modules 105 within components of the system 100 (e.g., the communication module 105a in the lead drone 102, the smart phone 113, etc.). In some embodiments, the core network 106 provides the communication module 105 with access to cloud-based services, cloud-connected devices (e.g., cloud servers 116), and other communication networks.
In some embodiments of the disclosed methods and apparatus, the drone camera 103 is used to determine a relatively rough estimate of the location of the target asset. Once the target asset is detected by processing a picture from the drone's camera, a rough estimate of the target's ground location can be determined based on the altitude and field of view of the drone. Once the approximate location of the camera's field of view is identified, an image of an area map covering the depicted region may be extracted from the API of a mapping service (e.g., Google Maps). In some cases, this extraction may be performed automatically. Alternatively, the image of the relevant area map may be extracted from a zone database. Once the image of the relevant map is obtained, the image rotation, scaling, and image fitting processes described herein can fit the image of the map to the picture and then finely position the target asset. In some embodiments, such services are provided by a processor within the cloud server 116.
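The following Python sketch illustrates the rough ground-location estimate described above for the simple case of a nadir (straight-down), north-aligned picture. The function name, the square field-of-view assumption, and the equirectangular metres-to-degrees conversion are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch only: rough ground position of a target seen at pixel (px, py)
# of a nadir, north-aligned drone picture, given the drone's position, altitude and
# camera field of view. All names and the simple equirectangular conversion are
# assumptions for illustration.
import math

def rough_ground_position(drone_lat, drone_lon, altitude_m,
                          fov_deg, image_w, image_h, px, py):
    """Map a pixel (px, py) in a nadir image to an approximate latitude/longitude."""
    # Ground footprint covered by the picture (square field of view assumed).
    footprint_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    meters_per_px = footprint_m / image_w

    # Offset of the pixel from the image centre, in metres on the ground.
    dx_m = (px - image_w / 2.0) * meters_per_px   # east-west
    dy_m = (image_h / 2.0 - py) * meters_per_px   # north-south (image y grows downward)

    # Equirectangular approximation: metres to degrees near the drone's latitude.
    dlat = dy_m / 111_320.0
    dlon = dx_m / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon

# Example: a 4k picture taken 100 m above ground with a 90-degree field of view.
print(rough_ground_position(37.7749, -122.4194, 100.0, 90.0, 3840, 2160, 3000, 500))
```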
The tracked target asset is located by taking a picture of the field of view, rotating (and in some embodiments scaling) the picture to fit an image of the map (e.g., information obtained from Google Maps), and inferring the object location through image recognition.
In other embodiments, the drone is equipped with an accurate position tracking system so that the position of the drone can be accurately determined (e.g., using a satellite position location system, ground triangulation, drone triangulation, other position location techniques, or a combination of one or more of these techniques).
The disclosed method and apparatus are capable of providing very accurate real-time location information about a target asset. Further, the disclosed methods and apparatus may be used to find a particular object or person by matching information derived from pictures taken by a camera to a database and using an object or pattern recognition algorithm to locate a target asset. After locating the target asset, the system 100 may follow the target asset. In some embodiments where a drone is used to support a camera, the drone may move accordingly to maintain visual contact with the target asset.
The ground area that the camera can capture may include the entire area directly under all of the drones. Alternatively, the image taken by the camera may capture only the geographic area under the drone carrying the camera, or the areas under each of the drones 102, 104.
In other embodiments, the secondary drones 104 are outside the area captured by images taken by the camera in the lead drone 102, at least for some portion of the time that the drones provide information for use by the system 100, and possibly for the entire time. Nonetheless, in some embodiments, each of the secondary drones 104 may communicate with the lead drone 102. In some such cases, each secondary drone 104 may also communicate with other secondary drones 104. In some embodiments, such communication is through a cellular telephone network or through a local area network. In other embodiments, other communication systems may be used instead of, or in addition to, the cellular telephone network. As will be explained in more detail below, the presence of multiple drones above the region of interest improves, and in some cases simplifies, fitting an image of a map to the pictures taken. In some embodiments in which the drone takes a picture of the area below the drone, the picture needs to be rotated by a 2D rotation mechanism. When the camera is positioned over the area of interest (or tracking area), an image of the map may be fitted to a picture produced by the camera's view of the area of interest. Each pixel in the picture is then assigned a coordinate based on the coordinates of the corresponding feature in the image of the map.
For example, using a 4k camera on a drone flying 100 meters above the region of interest may give a position tracking accuracy (depending on the field of view) of better than 1 m per pixel. This is better than the accuracy obtained from a typical GPS unit (depending on hardware, coordinate system, etc.).
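A quick back-of-the-envelope check of this figure, assuming a 90-degree horizontal field of view (the text above states only the altitude and the 4k resolution):

```python
# Rough check of the stated accuracy, under an assumed 90-degree horizontal field of view.
import math

altitude_m = 100.0
fov_deg = 90.0      # assumption; not specified in the text above
width_px = 3840     # horizontal resolution of a 4k (UHD) camera

footprint_m = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)  # about 200 m on the ground
print(footprint_m / width_px)  # about 0.05 m per pixel, comfortably under 1 m per pixel
```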
When pictures are taken of areas that are not directly under the drone, 3D rotation may be required. The 3D rotation is typically more complex and may require artificial intelligence to assist in the image-map matching process.
In some embodiments, the lead drone 102 may also communicate with an internet gateway 114. The internet gateway 114 provides a means by which pictures of the scene 115 taken by the camera 103 within the lead drone 102 (and possibly images taken by cameras 103 within the secondary drones 104 or mounted on fixed supports indoors or outdoors) may be transmitted over the internet 117 to a cloud-based server 116 or other resource within the cloud. The image may then be compared to another image 118, such as an image taken by a satellite 119. Using image recognition algorithms, the processor 116 in the cloud may then identify a target asset, such as a person running a marathon, and track the target asset based on a comparison of the images captured by the cameras within the drones 102, 104 with images and other feature data independently known to the processor 116.
Fig. 2 is an illustration of the indoor camera 103b mounted on a wall 204 inside a building 206. In some embodiments, the system 100 uses a combination of the indoor camera 103b and the outdoor camera 103a to capture information.
In some embodiments, the cameras 103 are located at known locations and are capable of communicating with other components of the system 100 through associated communication modules 105. In some embodiments, at least one of the communication modules 105 is integrated into one or more associated cameras 103, wherein the communication module 105 is electrically coupled to the camera 103. In such embodiments, the other communication module 105 may be external to the camera 103, but integrated in a component of the system 100 (e.g., the drone 104 in which the camera 103 is also located) and electrically coupled with the associated camera 103. In some embodiments, one communication module 105 may be electrically coupled to several associated cameras 103 and provide wireless access to several associated cameras 103. The system 100 may use a camera 103 located on a fixed platform (e.g., wall-mounted camera 103b in fig. 2) or a mobile platform (e.g., camera 103a mounted on the drone 102 in fig. 1).
In some embodiments, the components of the system 100 communicate wirelessly with each other, for example over a cellular network, or over a local area network (LAN) using WiFi or other wireless communication systems. The position of the camera 103 may be fixed, for example when the camera 103 is mounted on a wall, light pole, or ceiling, or the position of the camera 103 may change over time, for example if the camera 103 is mounted on a vehicle, robot, or drone. Such a camera 103 takes a picture of the scene 115, the person 208, or the object of interest within a particular field of view.
In some embodiments where the indoor camera 202 is part of the system 100, the indoor camera is also connected to a cellular telephone transceiver.
Fig. 3 is a diagram of the system 100. A camera, such as camera 103b, is mounted on the wall 204 (as shown in fig. 2), or a camera, such as camera 103a, is mounted within the drone 102 having a cellular telephone transceiver 302, with the camera 103a coupled to the cellular telephone transceiver 302. One or more of the drones 102, 104 has a camera 103 capable of taking relatively high resolution pictures of the ground and of ground features below the drones 102, 104.
In some embodiments, an image of the area map may be fitted within the picture using a technique known as "image fitting." Objects within the picture may then be identified and correlated with objects within the image of the area map. Thus, the target asset may be accurately located within the area map, and/or relative to other features and objects identified within the picture whose locations are known from the image of the map. Some embodiments use complex image processing algorithms that attempt pattern matching, image rotation, and in some embodiments scaling, to find the best fit. In some cases, the picture is digitally rotated and/or scaled to fit the image of the area map to the picture. In other embodiments, the image of the area map may be digitally rotated and/or scaled to match the orientation and relative size of the picture. Thus, when a "best fit" is found, the system 100 may provide the location of the target asset with respect to features and objects having known locations within the image of the map.
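As one way to picture the "image fitting" search described above, the sketch below runs a brute-force search over rotation angle and scale, using OpenCV template matching as the pattern-matching score. The grid of angles and scales, the function name, and the use of matchTemplate are illustrative assumptions rather than the prescribed implementation.

```python
# Minimal sketch of a rotation/scale search that aligns a map image with a camera picture.
import cv2
import numpy as np

def fit_map_to_picture(picture_gray, map_gray, angles=range(0, 360, 5),
                       scales=(0.8, 0.9, 1.0, 1.1, 1.2)):
    """Return (best_score, best_angle, best_scale, top_left) of the best fit found."""
    best = (-1.0, None, None, None)
    h, w = map_gray.shape
    centre = (w / 2, h / 2)
    for angle in angles:
        rot = cv2.getRotationMatrix2D(centre, angle, 1.0)
        rotated = cv2.warpAffine(map_gray, rot, (w, h))
        for scale in scales:
            template = cv2.resize(rotated, None, fx=scale, fy=scale)
            if (template.shape[0] > picture_gray.shape[0] or
                    template.shape[1] > picture_gray.shape[1]):
                continue  # the template must fit inside the picture for matchTemplate
            result = cv2.matchTemplate(picture_gray, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(result)
            if score > best[0]:
                best = (score, angle, scale, loc)
    return best
```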
Depending on the particular application of the method and apparatus (e.g., whether the goal is to locate a missing object such as a missing car, identify an empty parking space, find a particular person, etc.), other techniques such as facial feature recognition, object detection, and the like are used in some embodiments.
In some such embodiments of the disclosed method and apparatus, a machine learning (ML) algorithm is used for object recognition prior to determining the location of, and tracking, the target asset. In other embodiments, a deep neural network (DNN) is used for object detection. In still other embodiments, one or more AI algorithms for performing facial recognition are used to detect images of people. For moving target assets, position tracking algorithms based on image rotation (and in some embodiments scaling) may be used to update the position of the target asset on a per-frame basis.
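The paragraph above does not tie the object-recognition step to a particular network, so as a hedged illustration the sketch below simply runs a pretrained torchvision detector over a camera frame to obtain candidate boxes before the position-tracking steps of FIGS. 4 and 5. The file name and score threshold are made up for the example.

```python
# Illustrative object-recognition step: candidate boxes from a pretrained detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("drone_frame.jpg").convert("RGB")  # hypothetical input picture
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Keep confident detections; each box is (x1, y1, x2, y2) in pixel coordinates.
boxes = detections["boxes"][detections["scores"] > 0.8]
print(boxes)
```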
Fig. 4 shows an example of a position tracking step based on 2D rotation when the region of interest is directly below the camera field of view. The figure shows an exemplary image 410 taken by a camera above a tracking or locating area. The object of interest for location tracking or localization is a truck parked alongside a parking lot building. Assume that the van of interest has been identified by an object detection mechanism, such as an object detection neural network architecture based on sliding windows [1], R-CNN (Region CNN), Histograms of Oriented Gradients (HOG) [2], or YOLO [3]. The mechanism draws a box 412 around the detected object of interest. In one embodiment, once the object is spatially identified in the picture, the next step is to apply an edge detection 414 mechanism. Edge detection 414 basically looks for boundaries that delineate particular objects (e.g., a road, a building 422, etc.). In different embodiments, the number and type of edge-detected objects may differ. These edges can be obtained by various AI techniques, such as specific filters in convolutional neural network (CNN) architectures. The box containing the object of interest 416 is also transferred to the diagram 420, while other image details may be removed. This simplification can greatly reduce the processing load of the image rotation at step 444. Step 424 performs a 2D rotation of the image 420 reduced to the edge subset. In one embodiment of the invention, the 2D rotation 424 mechanism proceeds in small steps and rotates the diagram 420 into its rotated version 430. The edge matching block 434 then electronically overlays the image 430 on top of the map 440 and attempts to find the difference between the two images. In some embodiments, the edge matching process applies an edge detection process to the map to identify the edges of the corresponding buildings 432, roads 433, and other objects on the map. At the output of this process, a simplified version of the map 440 is created for comparison with the image 430; this is shown in fig. 4 as image 450. The mechanism in fig. 4 then compares the rotated image 430 to the simplified map 450 by finding the difference between the pixels of the two images, and adjusts the rotation angle and image scaling to minimize that difference. The difference may be defined as an error function that can be minimized by various algorithms, such as a gradient descent (GD) algorithm. This error minimization can be considered an iterative process that progressively reduces the difference between the two images. In another embodiment, the error function may be defined using a statistical machine learning algorithm (e.g., K-nearest neighbors). Once the error function is minimized, the location of the object may be identified by the location of the box 426 (i.e., box 446) on the image 450. This task is performed by the position estimation block 454.
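A condensed sketch of that loop is shown below: Canny edges stand in for the edge detection blocks, and a derivative-free Nelder-Mead search over angle and scale stands in for the gradient descent mentioned above (the edge-difference error is not smooth, so a gradient-free search is the simpler choice here). File names, thresholds, and the mismatch measure are illustrative assumptions.

```python
# Minimal sketch of the FIG. 4 loop: detect edges in the picture and the map, then
# search rotation angle and scale so that the two edge images differ as little as possible.
import cv2
import numpy as np
from scipy.optimize import minimize

picture = cv2.imread("camera_picture.png", cv2.IMREAD_GRAYSCALE)
map_img = cv2.imread("area_map.png", cv2.IMREAD_GRAYSCALE)

picture_edges = cv2.Canny(picture, 100, 200)   # edges of roads, buildings, etc.
map_edges = cv2.Canny(map_img, 100, 200)       # simplified map, as in image 450

def edge_mismatch(params):
    angle, scale = params
    h, w = picture_edges.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    rotated = cv2.warpAffine(picture_edges, rot, (w, h))
    # Resize the map edges to the picture size and count mismatching edge pixels.
    ref = cv2.resize(map_edges, (w, h))
    return float(np.count_nonzero(rotated != ref))

result = minimize(edge_mismatch, x0=[0.0, 1.0], method="Nelder-Mead")
best_angle, best_scale = result.x
# The detected object's box (412 in FIG. 4) can now be mapped through the same
# rotation and scale to its position on the map for the final estimate.
```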
Fig. 5 shows an example of a position tracking step based on 3D rotation when the region of interest is not located directly below the camera and is viewed at an arbitrary tilt angle. In one embodiment, the initial picture taken by the camera is 3D rotated to create an estimated top-view image 510. In many cases, this is a complex process that involves creating a 3D representation from the 2D picture, which is then rotated toward the top (90°) view. In some embodiments, the 3D rotation task may be performed using a state-of-the-art deep neural network (DNN), such as an autoencoder or a generative adversarial network (GAN).
In one embodiment, after the 3D rotation of the image, the process is similar to that of fig. 4. In this case, edge detection is performed by module 514, followed by 2D rotation 524 and edge matching 534 with the map 540 or its simplification 550. After the feedback mechanism minimizes the error, the location of the object of interest is identified by the localization box 526 on the map 550.
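The 3D rotation of FIG. 5 is described above as a deep-network task (e.g., an autoencoder or GAN). As a simpler classical stand-in, the sketch below rectifies an oblique picture to an approximate top view with a planar homography, assuming four ground points visible in the picture have known positions in a map-aligned frame; all point values are made up for illustration.

```python
# Classical stand-in for the 3D-rotation step: planar homography rectification.
import cv2
import numpy as np

oblique = cv2.imread("oblique_picture.png")

# Pixel coordinates of four ground reference points in the oblique picture ...
src_pts = np.float32([[420, 910], [1510, 880], [1680, 1420], [300, 1460]])
# ... and where those same points fall in a 1000x1000 top-view (map-aligned) frame.
dst_pts = np.float32([[200, 200], [800, 200], [800, 800], [200, 800]])

H = cv2.getPerspectiveTransform(src_pts, dst_pts)
top_view = cv2.warpPerspective(oblique, H, (1000, 1000))
# top_view can then be fed to the edge-detection / 2D-rotation / edge-matching
# steps shown for FIG. 4.
```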
While the disclosed methods and apparatus have been described above in terms of various examples of embodiments and implementations, it should be understood that the specific features, aspects, and functions described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described. Thus, the breadth and scope of the claimed invention should not be limited by any of the examples provided in describing the above-disclosed embodiments.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended and not limiting. As examples of the foregoing: the term "including" should be understood as "including but not limited to," and the like; the term "example" is used to provide an example of the item in question, not an exhaustive or limiting list thereof; the terms "a" or "an" should be understood to mean "at least one," "one or more," and the like; adjectives such as "conventional," "traditional," "normal," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, when this document refers to technologies that are apparent or known to one of ordinary skill in the art, such technologies include those that are apparent or known to one of ordinary skill in the art now or at any time in the future.
A group of items linked with the conjunction "and" should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as "and/or" unless expressly stated otherwise. Likewise, a group of items linked with the conjunction "or" should not be read as requiring mutual exclusivity among that group, but rather should be read as "and/or" unless expressly stated otherwise. Furthermore, although items, elements or components of the disclosed methods and apparatus may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated.
In certain instances, the presence of expansion words and phrases such as "one or more," "at least," "but not limited to," or other like phrases should not be construed to mean that the narrower case is intended or required where such expansion phrases may not be present. The use of the term "module" does not mean that the components or functions described as part of the module or claimed are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, may be combined in a single package or maintained separately, and may further be distributed in multiple groups or packages, or across multiple locations.
Furthermore, the various embodiments set forth herein are described with the aid of block diagrams, flowcharts, and other illustrations. It will become apparent to those skilled in the art upon reading this document that the illustrated embodiments and various alternatives thereof may be practiced without limitation to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as imposing particular architectures or configurations.

Claims (1)

1. An image processing system comprising:
(a) a set of outdoor cameras on a fixed or mobile platform; and
(b) a processor in the cloud connected to the Internet and in communication with the set of outdoor cameras, the processor configured to use images received from the set of outdoor cameras and compare the received images to other images taken by satellites, and to use an image recognition algorithm to identify a target asset and track the target asset based on a comparison of at least one image captured by the set of outdoor cameras.
CN202111665151.5A 2021-01-06 2021-12-31 Position tracking platform based on-demand images Pending CN114723780A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/143,059 US20210256712A1 (en) 2018-03-15 2021-01-06 On-Demand Image Based Location Tracking Platform
US17/143,059 2021-01-06

Publications (1)

Publication Number Publication Date
CN114723780A true CN114723780A (en) 2022-07-08

Family

ID=82236476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111665151.5A Pending CN114723780A (en) 2021-01-06 2021-12-31 Position tracking platform based on-demand images

Country Status (1)

Country Link
CN (1) CN114723780A (en)

Similar Documents

Publication Publication Date Title
US10339387B2 (en) Automated multiple target detection and tracking system
EP2423871B1 (en) Apparatus and method for generating an overview image of a plurality of images using an accuracy information
EP3729227B1 (en) Image based localization for unmanned aerial vehicles, and associated systems and methods
US8902308B2 (en) Apparatus and method for generating an overview image of a plurality of images using a reference plane
Yahyanejad et al. Incremental mosaicking of images from autonomous, small-scale uavs
KR101415016B1 (en) Method of Indoor Position Detection Based on Images and Mobile Device Employing the Method
CN112050810B (en) Indoor positioning navigation method and system based on computer vision
KR20060082872A (en) System and method for geolocation using imaging techniques
US11036240B1 (en) Safe landing of aerial vehicles upon loss of navigation
US20220377285A1 (en) Enhanced video system
Kato et al. NLOS satellite detection using a fish-eye camera for improving GNSS positioning accuracy in urban area
US20230360234A1 (en) Detection of environmental changes to delivery zone
US20210256712A1 (en) On-Demand Image Based Location Tracking Platform
US20190286876A1 (en) On-Demand Outdoor Image Based Location Tracking Platform
KR102542556B1 (en) Method and system for real-time detection of major vegetation in wetland areas and location of vegetation objects using high-resolution drone video and deep learning object recognition technology
CN107357936A (en) It is a kind of to merge multi-source image automatically to provide the context aware system and method for enhancing
CN114723780A (en) Position tracking platform based on-demand images
Partanen et al. Implementation and Accuracy Evaluation of Fixed Camera-Based Object Positioning System Employing CNN-Detector
Shukla et al. Automatic geolocation of targets tracked by aerial imaging platforms using satellite imagery
Silva Filho et al. UAV visual autolocalizaton based on automatic landmark recognition
Pi et al. Deep neural networks for drone view localization and mapping in GPS-denied environments
Suzuki et al. GNSS photo matching: Positioning using gnss and camera in urban canyon
KR102602420B1 (en) A radar system for tracking a moving object using two-dimensional ground surveillance radar and altitude information of map data and a method for extracting altitude information using the same
CN111931638B (en) Pedestrian re-identification-based local complex area positioning system and method
RU2816087C1 (en) Autonomous optical local positioning and navigation system

Legal Events

Date Code Title Description
PB01 Publication