WO2021141666A2 - Unmanned vehicle navigation and associated methods, systems, and computer-readable media - Google Patents

Unmanned vehicle navigation and associated methods, systems, and computer-readable media

Info

Publication number
WO2021141666A2
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
feature
image
location
code
Prior art date
Application number
PCT/US2020/060473
Other languages
English (en)
Other versions
WO2021141666A3 (fr)
Inventor
Ahmad Y. Al Rashdan
Michael L. Wheeler
Roger Lew
Dakota Roberson
Lloyd M. Griffel
Roger BOZA
Michael W. Thompson
Original Assignee
Battelle Energy Alliance, Llc
Priority date
Filing date
Publication date
Application filed by Battelle Energy Alliance, Llc filed Critical Battelle Energy Alliance, Llc
Priority to US17/755,878 (published as US20220383541A1)
Publication of WO2021141666A2
Publication of WO2021141666A3


Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10 - Simultaneous control of position or course in three dimensions
    • G05D 1/101 - Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D 1/102 - Simultaneous control of position or course in three dimensions specially adapted for aircraft specially adapted for vertical take-off of aircraft
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/28 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 - Route searching; Route guidance
    • G01C 21/36 - Input/output arrangements for on-board computers
    • G01C 21/3602 - Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/0094 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target

Definitions

  • This application claims the benefit of the filing date of United States Provisional Patent Application Serial No. 62/934,976, filed November 13, 2019, for “Image-Driven Self-Navigation of Drones in Indoor Environments.” This application also claims the benefit of the filing date of United States Provisional Patent Application Serial No. 63/090,645, filed October 12, 2020, for “Route-Operable Unmanned Navigation of Drones (ROUNDS).”
  • Embodiments of the present disclosure relate generally to unmanned vehicle navigation, and more specifically to systems, methods, and computer-readable medium for navigating an unmanned vehicle within and/or around an environment via a number of visual features. Yet more specifically, some embodiments relate to an autonomous navigation system for navigating a vehicle within and/or around an environment via a number of image-based codes.
  • Unmanned vehicles, which are also referred to as uncrewed vehicles or autonomous vehicles (e.g., remotely piloted aircraft systems (RPAS), unmanned aerial vehicles, autonomous aircraft, remotely piloted vehicles (RPVs), drones, and the like), are vehicles without an on-board human. Some unmanned vehicles are used in military applications such as, for example, surveillance, cargo delivery, bombing, and air support. Unmanned vehicles have also been used in non-military roles such as delivering cargo and packages, aerial photography, geographic mapping, search and rescue, disaster management, agriculture management, wildlife monitoring, law enforcement surveillance, construction management, and storm tracking.
  • the system may include one or more processors configured to communicatively couple with an unmanned vehicle.
  • the one or more processors may be configured to receive an image from the unmanned vehicle positioned within an environment and detect one or more features inserted into the environment and depicted in the image.
  • the one or more processors may be further be configured to determine a location of the unmanned vehicle based on the one or more features and convey one or more commands to the unmanned vehicle based on the location of the unmanned vehicle
  • One or more embodiments of the present disclosure include a method.
  • the method may include positioning a number of features within an environment.
  • the method may also include receiving an image from a vehicle positioned within or proximate to the environment.
  • the method may also include detecting at least one feature of the number of features within the image.
  • the method may include determining a location of the vehicle based on the at least one feature.
  • the method may further include conveying one or more commands to the vehicle based on the location of the vehicle.
  • Other embodiments may include a non-transitory computer-readable medium including computer-executable instructions that, when executed, perform acts.
  • the acts include detecting at least one feature inserted into an environment and depicted within an image captured via a vehicle within or proximate to the environment.
  • the acts may also include decoding information stored in the at least one feature.
  • the acts may further include determining a location of the vehicle relative to the at least one feature.
  • the acts may further include conveying one or more control signals to the vehicle based on the location of the vehicle and the information stored in the at least one feature.
  • FIG. 1 illustrates an example environment, including a vehicle and a number of visual features, in which one or more embodiments of the present disclosure may be configured to operate;
  • FIG. 2 depicts an example system including a number of modules, in accordance with various embodiments of the present disclosure
  • FIG. 3 depicts a code and a vehicle in a number of positions relative to the code, in accordance with various embodiments of the present disclosure
  • FIG. 4 depicts an example control loop, according to various embodiments of the present disclosure
  • FIG. 5 depicts a code and a bounding box, according to various embodiments of the present disclosure
  • FIG. 6 illustrates an example system, according to various embodiments of the present disclosure
  • FIG. 7 is a flowchart of an example method of detecting codes, in accordance with various embodiments of the present disclosure.
  • FIGS. 8A-8D each depict a code, a vehicle, and one or more planes, which may be used in accordance with various embodiments of the present disclosure
  • FIG. 9 depicts an image plane and a code, according to various embodiments of the present disclosure.
  • FIGS. 10A-10E depict various example geometries, which may be used in accordance with various embodiments of the present disclosure
  • FIG. 11 depicts an example model for vehicle control, according to various embodiments of the present disclosure.
  • FIG. 12A illustrates an example image including a number of codes, in accordance with various embodiments of the present disclosure
  • FIGS. 12B and 12C depict example filter outputs including a number of codes, according to various embodiments of the present disclosure
  • FIGS. 12D and 12E depict example regions of interest including codes, in accordance with various embodiments of the present disclosure
  • FIG. 12F depicts an example heatmap result, in accordance with various embodiments of the present disclosure
  • FIG. 12G illustrates an example mask, according to various embodiments of the present disclosure
  • FIG. 12H depicts a number of codes including bounding boxes, according to various embodiments of the present disclosure.
  • FIG. 13 illustrates an example field of view of a camera, in accordance with various embodiments of the present disclosure
  • FIG. 14A depicts an example flow for identifying code locations, in accordance with various embodiments of the present disclosure
  • FIG. 14B depicts an example flow for processing codes, according to various embodiments of the present disclosure
  • FIG. 15 depicts an example control loop, in accordance with various embodiments of the present disclosure.
  • FIG. 16 is a flowchart of an example method of operating a navigation system, in accordance with various embodiments of the present disclosure.
  • FIG. 17 illustrates an example system which may be configured to operate according to one or more embodiments of the present disclosure.
  • a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a general-purpose processor may be considered a special-purpose processor while the general-purpose processor executes instructions (e.g., software code) stored on a computer-readable medium.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • embodiments may be described in terms of a process that may be depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe operational acts as a sequential process, many of these acts can be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be re-arranged.
  • a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
  • the methods disclosed herein may be implemented in hardware, software, or both. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on computer-readable media.
  • Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another.
  • any reference to an element herein using a designation such as “first,” “second,” and so forth, does not limit the quantity or order of those elements, unless such limitation is explicitly stated. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner.
  • a set of elements may comprise one or more elements.
  • a vehicle may be configured to capture (e.g., via a camera) a number of images at specific locations throughout an environment, wherein each image may include one or more features (e.g., visual features (e.g., codes, such as quick response codes (QR codes) or bar codes) and/or non-visual features of any suitable shape and/or size).
  • the one or more features, which may have known sizes, colors, patterns, and/or shapes, may be inserted into the environment (e.g., and positioned at desired locations).
  • Images may be processed (i.e., via one or more processors) and used to guide the vehicle along a desired route of the environment. More specifically, for example, one or more processors may be configured to receive an image from a vehicle, detect one or more codes in the image, decode the one or more codes, determine a location of the vehicle based on the one or more codes, and control the vehicle based on the location of the vehicle.
  • various embodiments disclosed herein may have various real-world applications.
  • various embodiments may be used for surveillance, data collection, and/or performance of various tasks within, for example, a non-GPS environment (e.g., an indoor environment, such as a nuclear power plant).
  • various embodiments of the present disclosure may allow for automation of activities within an environment.
  • a vehicle may perform periodic inspections and surveys and/or perform operator and security rounds.
  • a vehicle may be outfitted with tooling, sensors, and/or other devices and/or materials to enter areas that are hazardous to humans to perform tasks (e.g., inspections or other procedures).
  • a vehicle may be able to transport resources (e.g. materials, tools, documents) to and from a work site.
  • a vehicle may be configured to survey radiation fields and complement humans to expedite tasks (e.g., enable supervisors, schedulers, and reviewers to remotely verify work progress).
  • various navigation systems described herein may be used to supplement and/or augment other (e.g., known) navigation methods, such as, for example only, simultaneous localization and mapping (SLAM), target tracking, and/or GPS (e.g., to increase accuracy and/or increase performance) (e.g., during at least part of a route).
  • various embodiments may relate to using a first navigation system (e.g., SLAM, target tracking, GPS, etc.) during some parts of a navigation process and using a different navigation system (i.e., according to various embodiments described more fully below) during different parts of the navigation process.
  • a method may include alternating use of, for example, SLAM or another known navigation system, and a navigation system according to various embodiments, as described more fully herein.
  • a navigation system may include alternating use of, for example, SLAM or another known navigation system, and a navigation system according to various embodiments, as described more fully herein.
  • a “vehicle” or a “drone” includes, but is not limited to, air, land, or water vehicles.
  • a vehicle may include one or more cameras, including installed, integrated, or added cameras.
  • a feature may include a marker such as an infrared or ultraviolet marker (e.g., detectable via an infrared or ultraviolet camera).
  • a feature may include a visual feature (e.g., a code) such as a QR code or a bar code.
  • Various embodiments of the present disclosure will be described generally with reference to FIGS. 1-5. Further, a first implementation of unmanned vehicle navigation will be described with reference to FIGS. 1-11, 17, and 18. Moreover, a second, different implementation of unmanned vehicle navigation will be described with reference to FIGS. 1-5 and 12A-18.
  • FIG. 1 illustrates an example system 100 for navigating at least a portion of an environment 102, in accordance with one or more embodiments of the present disclosure.
  • System 100 includes a vehicle 104, which may include, for example only, an unmanned vehicle (e.g., an unmanned aerial vehicle (UAV), such as a drone).
  • System 100 further includes a number of visual features 106, which may include a code, such as a QR code, for example.
  • System 100 also includes a point (also referred to herein as a “nest”) 108, which may include a start/finish point. According to some embodiments, point 108 may include a charging pad for charging vehicle 104.
  • vehicle 104 may include charging pins such that vehicle 104 may establish electrical contact upon landing on or near point 108.
  • a dedicated-charging graphical user interface (GUI) for determining a charging status of vehicle 104 may be provided.
  • vehicle (also referred to herein as “drone”) 104 may travel from point 108 around and/or through at least a portion of environment 102 (e.g., along a preconfigured route 105) and return to point 108. Further, vehicle 104 may be configured to capture (e.g., via a camera of vehicle 104) visual features (e.g., codes, such as QR codes) 106 positioned at preconfigured locations within and/or proximate environment 102 to guide vehicle 104 on route 105 around and/or through environment 102.
  • visual feature 106 may be identified, extracted from an image, and decoded for its data (e.g., to map it to a location (e.g., as stored in a table and/or database) or provide additional route instructions). For example, visual feature 106 may provide general localization data, such as in what building or hallway vehicle 104 is positioned and/or a route that vehicle 104 should fly after detecting and decoding the associated visual feature 106. Further, visual feature 106 may be analyzed to determine a location of vehicle 104 with respect to the associated visual feature 106.
  • a location of vehicle 104 may be determined by comparing known dimensions of visual feature 106 to its representation in the image. Further, based on the location of vehicle 104, one or more commands may be conveyed to vehicle 104 for controlling operation thereof.
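  • As an illustration of the size-comparison idea above, a minimal sketch (in Python) of pinhole-camera distance estimation follows; the function name, focal length, and code size are illustrative assumptions, not parameters from the disclosure.

        # Minimal sketch (not the disclosure's exact method): estimating the
        # distance to a visual feature of known physical size with a pinhole
        # camera model. 'focal_length_px' and 'code_size_m' are assumed values.
        def estimate_distance_m(code_size_m: float,
                                code_height_px: float,
                                focal_length_px: float) -> float:
            """Distance along the optical axis, assuming the feature faces the camera."""
            return code_size_m * focal_length_px / code_height_px

        # Example: a 0.20 m QR code that appears 85 px tall with a 920 px focal
        # length is roughly 2.2 m away.
        print(estimate_distance_m(0.20, 85.0, 920.0))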
  • vehicle 104 may be configured to receive commands synchronously and asynchronously.
  • vehicle 104 may couple to a controller (e.g., a host computer) (e.g., via WiFi), which may be configured to send and receive signals to and from vehicle 104.
  • vehicle 104 may be configured to receive x, y, and z translations (e.g., in meters) that correspond to directional moves, as well as a yaw command that allows vehicle 104 to turn on its central axis to face different directions.
  • Vehicle 104 may be configured to move specified distances in loop (e.g., a closed loop) control.
  • Vehicle 104 may receive roll (strafe left to right), pitch (forward and backward), yaw (rotate left and right), and Gaz (rise and fall) values, wherein these values may vary from 0% to 100% power or speed in either direction and may be limited if vehicle 104 is in a sensitive area. In some embodiments, vehicle 104 may be configured to move for as long as vehicle 104 receives a command.
  • vehicle 104 may be configured to hover at a specified location (e.g., with a location accuracy on the order of 1 inch). Moreover, vehicle 104 may fly without compromising flight stability and may provide robustness to accidental contact.
  • FIG. 2 depicts an example system 200 including a number of modules, in accordance with various embodiments of the present disclosure.
  • System 200 includes a main module 202 coupled to each of a vehicle module 204, a computer vision module 206, a location module 208, and a control module 210.
  • main module 202, vehicle module 204, computer vision module 206, location module 208, and/or control module 210 may be implemented as one or more software modules (e.g., as part of a software package).
  • system 200 may be configured to detect visual features in an image of a video stream provided by a vehicle, determine a location of the vehicle based on metrics derived from a visual feature and distortion within the image, and navigate the vehicle (i.e., via a number of commands) based on the location of the vehicle (i.e., relative to the visual feature).
  • main module 202 may receive an image from vehicle module 204 (i.e., including a vehicle) and convey the image to computer vision module 206. Further, as described more fully below, computer vision module 206 may detect a code (e.g., a QR code) in the image, generate a bounding box around the code, and convey the code including the bounding box to main module 202. Further, the bounding box and the code may be conveyed from main module 202 to location module 208, which, as described more fully below, may use data associated with the code and the code view in the image to calculate a position of the vehicle relative to the code.
  • control module 210 may use the location information to convey one or more commands to vehicle module 204 for controlling the vehicle (e.g., for controlling a roll, pitch, yaw, and/or thrust of the vehicle). This cycle may be repeated (e.g., at a sub-second frequency) until the vehicle reaches a predetermined waypoint (e.g., as represented by a set of coordinates relative to the code).
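  • The detect-locate-command cycle described above could be sketched as follows; the module objects and method names are hypothetical placeholders rather than an API defined by the disclosure.

        # Illustrative sketch of the cycle described above (detect a code,
        # estimate the vehicle's relative position, send a correction command).
        # The objects (vehicle, vision, locator, controller, waypoint) and their
        # methods are hypothetical placeholders, not an actual interface.
        def navigation_cycle(vehicle, vision, locator, controller, waypoint):
            while True:
                frame = vehicle.get_frame()                 # image from the video stream
                detection = vision.detect(frame)            # code + bounding box, or None
                if detection is None:
                    continue                                # no feature visible this frame
                position = locator.relative_position(detection)  # (x, y, z, yaw) w.r.t. code
                if waypoint.reached(position):
                    break                                   # within tolerance; perform action
                commands = controller.update(position, waypoint)  # roll/pitch/yaw/throttle
                vehicle.send(commands)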
  • main module 202 may not be necessary, and, in these embodiments, vehicle module 204, computer vision module 206, location module 208, and control module 210 may communicate with one another as necessary.
  • at a known face-on position (i.e., only displaced from QR code 302 in one (e.g., Z) dimension), a location of vehicle 304 (i.e., at first position 308) relative to QR code 302 may be determined.
  • image deformation may be used to identify the actual position of vehicle 304 relative to QR Code 302.
  • a view of QR code 302 may be used independently of other views or codes to determine a location of vehicle 304.
  • a location of a code in one frame may be used to determine where to look for the code in a subsequent frame.
  • Control loop 400 includes a comparison node 402, a processor 404, a vehicle 406, a vision unit 408, and a processor 410.
  • Vision unit 408 and processor 410 may collectively be referred to as a “sensor loop” or “control loop.”
  • processor 404 and processor 410 may be a single processor or more than one processor.
  • processor 404 and/or processor 410 may include or may be part of control module 210, location module 208, and/or computer vision module 206 of FIG. 2.
  • vehicle 406 may include or may be part of vehicle module 204 of FIG. 2 and/or vision unit 408 may include or may be part of computer vision module 206 of FIG. 2.
  • vision unit 408 may detect a code (e.g., a QR code) in an image provided by vehicle 406 and convey the code to processor 410, which may determine the actual location of vehicle 406 based on the code. More specifically, processor 410 may be configured to calculate a relative distance from vehicle 406 to the code. Further, summation node 402 may be configured to receive a desired location for vehicle 406 and the actual location of vehicle 406 (e.g., from processor 410). Further, summation node 402 may provide an error value to processor 404, which may provide one or more commands to vehicle 406 based on the error value. More specifically, processor 404 may convey one or more commands to vehicle 406 for controlling one or more of a roll motion, a pitch motion, a yaw motion, and/or thrust of vehicle 406 in a number of (e.g., all) directions.
  • vehicle 406 may execute an arbitrary task (e.g., hover for several seconds, take a high-resolution photo, or pivot a certain amount so that the next code is within view).
  • vehicle 406 may receive instruction regarding the next location in the route. More specifically, the next location, or waypoint, may be provided to vehicle 406 (e.g., via processor 404). Further, in some embodiments, instructions and/or other information may be incorporated in a feature.
  • a route location or waypoint may include the following parts: (1) where vehicle 406 needs to fly with respect to the visual feature in view (given in a Cartesian coordinate system), (2) a minimum distance vehicle 406 must get to (i.e., from the waypoint), (3) a number of updates, or frames processed, during which vehicle 406 calculates that vehicle 406 is hovering (e.g., stably) within a waypoint tolerance (e.g., 0.1 meters), and (4) an action or maneuver vehicle 406 may take once vehicle 406 has arrived (e.g., stably) at the waypoint.
  • vehicle 406, which may include vehicle 104 of FIG. 1, may have a waypoint set at N meters (e.g., 2 meters) directly in front of a visual feature with a low tolerance of M meters (e.g., 0.1 meters) and P (e.g., 10) stable updates, ensuring vehicle 406 is in a precise location and hovering steadily.
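  • A waypoint with the four parts listed above might be represented with a simple data structure such as the following sketch; the field and variable names are illustrative.

        # A possible data structure for the waypoint parts listed above
        # (position relative to the feature, tolerance, required stable updates,
        # action). Field names are illustrative, not from the disclosure.
        from dataclasses import dataclass

        @dataclass
        class Waypoint:
            x: float             # desired offset from the visual feature, meters
            y: float
            z: float
            tolerance_m: float   # e.g., 0.1 m
            stable_updates: int  # e.g., 10 consecutive in-tolerance frames
            action: str          # e.g., "turn_90", "take_photo", "land"

        nest_approach = Waypoint(x=0.0, y=0.0, z=2.0, tolerance_m=0.1,
                                 stable_updates=10, action="land")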
  • a task may be performed to turn vehicle 104 (e.g., by a certain angle) and fly (e.g., “blindly”) for a distance (e.g., a few feet) before detecting the next visual feature.
  • a visual feature may be positioned directly in front of point 108 (see FIG. 1), and an action may be performed to land vehicle 104 at point 108.
  • vehicle 104 may acquire (i.e., via a camera) video and/or a number of images within environment 102.
  • after visual feature 106 is identified in an image, a bounding box is positioned around visual feature 106 in the image (e.g., see FIG. 5), visual feature 106 is extracted out of the image, visual feature 106 is decoded for its data (e.g., to map visual feature 106 to a location or provide additional route instructions), and visual feature 106 and its bounding box may be analyzed to determine the location of vehicle 104 with respect to the associated visual feature 106.
  • a visual feature may be detected via image processing and machine learning (ML) and/or deep learning (DL) algorithms.
  • ML and DL may utilize artificial neural networks (ANNs) and perform feature detection by analyzing input content, extracting meaningful features (e.g., edges, colors, and/or other distinct patterns), and learning a mathematical mapping function between an input and an output.
  • a mapping function may be developed by feeding images with visual features (e.g., QR codes) in different views and a bounding box around the visual features for the ANN to replicate the effort by creating a bounding box around the visual features in future images.
  • a QR code may be located anywhere in an image, and its location is not known a priori.
  • a QR code may be skewed horizontally and/or vertically depending on the perspective of a camera, and a QR code may be rotated at different angles relative to the rotation of the image and camera.
  • the size of the QR code may also vary due to its distance from the camera.
  • a single image may include multiple instances of an object to be detected. Moreover, a combination of these factors may happen simultaneously as they may not be mutually exclusive.
  • FIG. 6 depicts an example system 600 configured to identify locations of visual features in imagery, according to various embodiments of the present disclosure.
  • system 600 may be configured to receive an image including a code and generate a bounding box around a code. More specifically, system 600 may be configured to identify a code in an image (e.g., a video stream of images) and generate a bounding box (a contour) along the edges of the code.
  • system 600 may be configured to process multiple video frames per second, as a vehicle controller (e.g., a drone controller) may be configured for sub-second updates (e.g., for responsive and/or stable flight).
  • System 600 (also referred to herein as a “computer vision module”), which may include computer vision module 206 of FIG. 2, includes a DL module 608 and a ML module 610.
  • System 600 may be configured to receive image 602 and generate an output, which may include decoded data 605 stored in a detected visual feature 603 of image 602 and a bounding box 606 at least partially around detected visual feature 603.
  • an output may include more than one feature.
  • the output may include a subset image with visual feature 603.
  • System 600 may return, for example, “none” if a visual feature was not detected in image 602.
  • DL module 608 may include a convolutional neural network (CNN) configured for near/real-time object detection.
  • DL module 608 may include a real-time detection system, such as You Only Look Once, version 3 (YOLOv3).
  • Examples of such CNN-based object detection techniques are well known in the art. Non-limiting examples of such CNN-based object detection techniques include those shown in J. Redmon, A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv:1804.02767, April 2018.
  • DL module 608 may be configured to extract meaningful features from an image (e.g., image 602) and modify a scale of the image and allow for multiscale predictions. DL module 608 may further be configured to detect if an object (e.g., code 603) is present. DL module 608 may have a fixed input size, determined by the number of neurons in a first layer, which may dictate the size of an image that may be processed. Images that do not match the input size of the network may need to be resized to the appropriate dimension.
  • ML module 610, which may include a computer vision and machine learning module, may include a dedicated module for detecting and decoding codes (e.g., QR codes).
  • ML module 610 may include, for example, an Open Computer Vision (OpenCV) library, which is an open-source library of programming functions for real-time computer vision, as will be appreciated by a person having ordinary skill in the art.
  • DL module 608 may be configured to extract a QR code from an image and provide formatting to improve functionality of ML module 610. More specifically, for example, DL module 608 may remove at least some background “noise” of an image (e.g., image 602) and provide ML module 610 with a cropped image where a code (e.g., code 603) occupies the majority of the space of the image, thus reducing the likelihood of ML module 610 not detecting the code and also increasing the detection and decoding speed of ML module 610.
  • ML module 610 may use an edge detection algorithm across an image to identify a hierarchy of points that forms a feature (e.g., code) signature.
  • a detect operation of ML module 610 may perform the localization of the code (e.g., code 603), and in response to vertices of the code being returned, a decode operation may be performed to decode a message encoded in the code, and a string containing the message (e.g., “Sample text” of FIG. 6) may be generated.
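  • A hedged example of the detect-then-decode step using OpenCV's built-in QR detector is shown below; the disclosure identifies OpenCV generally, so the specific calls and the file path here are assumptions.

        # Sketch of a detect-then-decode step with OpenCV's built-in QR detector.
        # The image path is a placeholder for a cropped region containing the code.
        import cv2

        detector = cv2.QRCodeDetector()
        image = cv2.imread("frame.jpg")                  # placeholder input image

        data, points, _ = detector.detectAndDecode(image)
        if points is not None:
            print("Vertices:", points.reshape(-1, 2))    # four corner coordinates in pixels
            print("Message:", data or "<detected but not decoded>")
        else:
            print("none")                                # mirrors the 'none' return described above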
  • system 600 including DL module 608, may be trained to detect codes (e.g., QR codes) via neural network training methods, as will be appreciated by a person having ordinary skill in the art.
  • DL module 608 and/or ML module 610 may be trained via a training set that is generated to represent a feature (e.g., a code) in various views and distortions (e.g., by capturing pictures of the feature and/or virtually generating images of the feature in a simulated environment).
  • FIG. 7 is a flowchart of an example method 700 of detecting codes in an image, in accordance with various embodiments of the present disclosure.
  • Method 700 may be arranged in accordance with at least one embodiment described in the present disclosure.
  • Method 700 may be performed, in some embodiments, by a device or system, such as system 200 of FIG. 2, control loop 400 of FIG. 4, system 600 of FIG. 6, system 1700 of FIG. 17, or another device or system.
  • an input image object (e.g., from a video stream) may be converted into another (“converted”) image object, and method 700 may proceed to block 704.
  • an OpenCV image object (which may include accessible native camera data) may be converted into, for example only, a Python Imaging Library (PIL) image object (e.g., to be used by DL module 608 of FIG. 6).
  • a model (e.g. of DL module 608 of FIG. 6) may be run on the converted image object (i.e., to detect one or more codes (e.g., QR codes)), and method 700 may proceed to block 706.
  • a bounding box region may be expanded (e.g., by a percentage p, such as between 5% and 20%) to ensure edges of the code do not fall outside an original bounding box region.
  • a bounding box may be fit to an actual border of the code (e.g., via image processing).
  • a scaled image including the expanded bounding box may be cropped at block 710 and/or resized at block 712, if necessary.
  • the image may be cropped and/or resized if dimensions of the image exceed a maximum size established for good performance (i.e., for detecting QR codes) in ML module 610 of FIG. 6.
  • the image may be scaled down to, for example, 1,080 pixels in the respective dimension and the remaining dimensions may be scaled by the appropriate factor to maintain the original aspect ratio.
  • the image may be cropped to eliminate a surrounding environment (e.g., to provide an image focused on one or more codes).
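  • Blocks 708 through 712 might look like the following sketch (expand the bounding box, crop the region, and downscale so the largest dimension does not exceed 1,080 pixels while preserving aspect ratio); the parameter names and the 10% expansion are illustrative assumptions.

        # Sketch of the expand/crop/resize steps described above. Bounding box is
        # assumed to be (x, y, width, height) in pixels from the DL model.
        import cv2

        def crop_and_scale(image, bbox, expand=0.10, max_dim=1080):
            x, y, w, h = bbox
            dx, dy = int(w * expand), int(h * expand)          # expand so code edges stay inside
            x0, y0 = max(0, x - dx), max(0, y - dy)
            x1 = min(image.shape[1], x + w + dx)
            y1 = min(image.shape[0], y + h + dy)
            roi = image[y0:y1, x0:x1]                          # crop away surrounding environment

            scale = max_dim / max(roi.shape[0], roi.shape[1])
            if scale < 1.0:                                    # only downscale, never upscale
                roi = cv2.resize(roi, None, fx=scale, fy=scale)  # preserves aspect ratio
            return roi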
  • the scaled image may be provided to block 714, wherein a detect and decode operation (e.g., of ML module 610 of FIG. 6) may be performed.
  • method 700 may proceed from block 706 to block 712, where the original input image object may be resized (i.e., if necessary) and a detect and decode operation (e.g., of ML module 610 of FIG. 6) may be performed at block 714.
  • if a code was detected at block 714, method 700 may proceed from block 716 to block 718, where a bounding box (i.e., at least partially around a detected code) and possibly a decoded message is provided (e.g., by system 600 of FIG. 6). If a code was not detected at block 714, method 700 may proceed from block 716 to block 720, where a system (e.g., system 600) may return, for example, “none.” It is noted that method 700 may be repeated for each image and/or for a subset of images.
  • an image may include more than one code that may be used to make a decision regarding location.
  • a single code may provide sufficient accuracy, and more than one code may improve accuracy of location identification.
  • one or more images may be filtered (e.g., for improving detectability).
  • a visual feature (e.g., visual feature 106; see FIG. 1) may be analyzed to determine a location of a vehicle (e.g., vehicle 104 of FIG. 1) with respect to the visual feature.
  • FIGS. 8A-8D each depict a vehicle 802, a QR code 804, and one or more planes.
  • a view (e.g., a view from vehicle 802) of QR code 804 can be visualized by rectangle 806, which represents an image plane viewed by vehicle 802 (i.e., a camera of vehicle 802).
  • Image plane 806 is defined as the plane parallel to a camera sensor (i.e. of vehicle 802) and onto which a representation of everything within the camera’s FOV is projected.
  • the representation of QR code 804 on image plane 806 may be distorted unless all edges are of equal distance to the point of reference inside the camera. Therefore, by measuring the distortion, the yaw and relative distance between QR code 804 and vehicle 802 may be calculated, and the axes of QR code 804 may be used as the coordinate system by which to determine a location of vehicle 802.
  • a bounding box or contour along the edges of QR code 804 may be extracted from a number of frames (e.g., each frame or every few frames) streamed back from a camera in real-time.
  • the scale and transformation of the contour’s edges may be used to determine a number of view angles (e.g., including the yaw angle of vehicle 802 relative to a face of QR code 804) and a relative position of vehicle 802 in a Cartesian coordinate system.
  • QR code 804 is assumed to possess any yaw rotation including zero. Pitch and roll of QR code 804 are fixed at zero as QR code 804 may be level and positioned on a vertical wall, and a self-stabilizing (e.g., gimbal mounted) camera may assure that the camera is pointing straight ahead along the X axis of vehicle 802 and perpendicular to gravity. These assumptions and controlled variables allow for the projection on the YZ plane to appear, as shown in FIG. 8C.
  • the axis normal to the surface of QR code 804 and the axis normal to image plane 806 will be parallel.
  • image plane 806 and the plane of QR code 804 may be parallel when projected onto the YZ plane as shown in FIGS. 8C and 8D. This may simplify the requirements to deduce the location of vehicle 802 and is also a realistic solution when placing QR codes for vehicle navigation in the real world. It is noted that in these embodiments, the pitch of the QR code and camera are both zero, which is expected due to the QR code being mounted on a vertical wall and the gimbal on the vehicle maintaining the camera level.
  • yaw may be determined by comparing the horizontal and vertical edge lengths of the contour bounded around the projection of QR code 804 onto the camera plane.
  • FIG. 8D shows the projection of QR code 804 onto image plane 806 can be represented by drawing line segments from the camera to the edges of QR code 804 and then drawing a line segment between where those segments intersect image plane 806.
  • FIG. 9 depicts a view parallel to an image plane 902 with an undistorted view 904 of a QR code at a 25-degree yaw angle and a representation of a projection onto image plane 902 illustrated by a contour 908.
  • coordinates (in pixels) of the corners of the QR code may be extracted from the image (e.g., via system 600 of FIG. 6) and used to determine an edge length of each side of the projection depicted by contour 908.
  • the closest vertical edge 906 may have the greatest length and serve as the reference for the true size of the QR code at some unknown distance.
  • the closest edge to the point of reference may be determined as it will have the greatest length (i.e., the smaller the edge the further the distance).
  • the closest edge to the point of reference may then be used as a ratio for the true width of the QR code in pixels as there should be no pitch or roll to distort the projection.
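  • A minimal sketch of this measurement (edge lengths from the four projected corner coordinates, with the longer vertical edge taken as the reference width) follows; the corner ordering and sample coordinates are assumptions.

        # Edge lengths from the four projected corners of a code, assuming the
        # ordering top-left, top-right, bottom-right, bottom-left.
        import math

        def edge_lengths(corners):
            tl, tr, br, bl = corners
            dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
            return {
                "top": dist(tl, tr), "bottom": dist(bl, br),
                "left": dist(tl, bl), "right": dist(tr, br),
            }

        edges = edge_lengths([(100, 80), (180, 90), (178, 185), (98, 190)])
        reference_px = max(edges["left"], edges["right"])     # closest (largest) vertical edge
        projected_width_px = (edges["top"] + edges["bottom"]) / 2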
  • FIGS. 10A-10E depict various example geometries, which may be used to determine a location of a vehicle. It is noted that the embodiments described with reference to FIGS. 10A-10E are provided as example embodiments to determine a location of vehicle, and other embodiments may be used to determine a location of a vehicle.
  • a geometry 1000 of angles defining QR code yaw is depicted.
  • Angles shown in FIG. 10 are represented by a value of 90 degrees minus half the horizontal FOV (FOVH).
  • FOVQRi is the angle between the camera FOV edge and the edge of the QR code that is closest to the center of the camera FOV.
  • Angle FOVQRi may be determined by converting pixels to degrees as described above.
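  • The pixel-to-degree conversion referenced above can be approximated as shown below; the field-of-view and image-width values are assumptions, and a tangent-based mapping would be more exact near the FOV edges.

        # Linear approximation of pixel spans to angles within the horizontal FOV.
        FOV_H_DEG = 69.0          # assumed horizontal field of view of the camera
        IMAGE_WIDTH_PX = 1280     # assumed image width

        def pixels_to_degrees(pixel_span: float) -> float:
            return pixel_span * FOV_H_DEG / IMAGE_WIDTH_PX

        # e.g., the angle between the FOV edge and the nearest QR-code edge (FOVQRi)
        fov_qri = pixels_to_degrees(210)   # 210 px from the image edge -> about 11.3 degrees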
  • the Law of Sines may then be used to derive the yaw from the three known variables: angle A, a width of the projected QR code in pixels, and a representative true width in pixels of the QR code using the vertical edge pixel width.
  • FIG. 10B depicts a more detailed view of H depicted in FIG. 10A.
  • the width of the QR code measured in pixels is represented by variable LQR, while LQRP is the projected width of the QR code in pixels, and angle A is determined by the relationships described above.
  • the sign of angle A may be determined by which side of center the QR code falls on and which vertical edge is nearest to the point of reference. Due to the numerous possible orientations of the QR code with respect to the camera’s FOV, deriving the yaw value is not always straightforward as the Law of Sines allows for as many as two solutions. As will be appreciated, there are multiple rules for determining the true yaw angle given any orientation of the QR code within the camera’s FOV.
  • the sign of the yaw angle can be broken down into rules by where the axis normal to the QR code’s surface and passing through its center intersects the axis normal to a vehicle’s image plane and passing through the center of the camera. If the QR code is located in the second or third quadrant (left half) of the image plane and the two planes intersect in front of the QR code with respect to the vehicle, or if the QR code is in the first and fourth quadrants of the image plane (right side) and the planes intersect behind the QR code, the yaw may be considered negative. If the QR code is located in the first and fourth quadrants and the planes intersect in front of the QR code, the yaw may be considered positive. Likewise, if the QR code is located in the second or third quadrants of the image, the planes may intersect behind the QR code for the yaw to be positive. The sign may be maintained for accurate calculations of relative distances.
  • the distance (Lz) (see FIG. 10C) from POR 1002 to the QR code along the axis normal to the QR code’s surface may be derived as well as the horizontal distance (Lx) from the center of the QR code along the axis parallel to the QR code’s surface, as shown in FIG. 10C.
  • angles can be extracted from an image and used to solve the triangles needed to determine the relative distance of the vehicle from the QR code.
  • the number of pixels may be used to find the angle FOVQRR between the right edge of the camera FOV and the right most edge of the QR code (see FIG. 10D). This is different than FOVQRi defined above because this is always to the right edge of the QR code regardless of whether that is the inner or the outer edge.
  • Angle D in FIG. 10D can be derived by the following equation:
  • a triangle can be drawn using the three vertices defined by the horizontal edges of the QR code and one at POR 1002 of the camera, as shown in FIG. 10E.
  • Angle E may then be determined as the sum of the complementary angle of angle D and the yaw angle.
  • the last unknown angle (Angle F) may then be determined.
  • Law of Sines may be applied to derive length LQRe as the QR code width LQR is known.
  • Measurements of interest are represented by side lengths LQRe, LZ, and LX − (1/2)LQR in FIG. 10C, showing the right triangle of interest.
  • angle G may be calculated, which is merely the complementary angle of angle E.
  • a right-triangle is drawn to the right most edge of the QR code, therefore when the vehicle is to the left of the QR code, a negative value is returned from LQRe cos(G) and one half the QR code width is added, effectively subtracting one half the QR code width from the absolute value of the horizontal distance.
  • conversely, when the vehicle is to the right of the QR code, one half the QR code width is added to the absolute value of the horizontal distance. If the yaw angle is correct in magnitude but not in sign, the results of the horizontal calculations may be off by at least one half the QR code width and will also have the incorrect sign.
  • the location of vehicle 104 may be used in a control loop to maneuver vehicle 104 to the next location (e.g., proximate the next visual feature) along a route (e.g., route 105 of FIG. 1).
  • a linearized or non-linearized model of the vehicle’s position as a function of a Euler angle may be used.
  • a vehicle command allows for Euler angles (roll, pitch, and yaw) to be passed as arguments to the vehicle, as well as throttle to control the vehicle’s altitude.
  • the characterization of the model may be dependent on the control inputs that the vehicle supports.
  • a model for the roll and pitch axis response of the vehicle may be approximated using the vehicle’s internal inertial measurement unit and API reporting messages of the vehicle. The model may be used as the basis for the vehicle’s axis tilt stabilization controllers.
  • system identification may be used to determine a model for the vehicle’s response. Further, as will be appreciated, a model may be determined via performing test runs on a vehicle and noting how the vehicle responds, and/or a model may be developed, as described below.
  • the model represents the change in axis tilt of the vehicle based on a vehicle command input and may be effective at modeling the behavior of the vehicle’s axis tilt response for pitch and roll.
  • Roll and pitch angles determine the horizontal movements of the vehicle and may result in changes in acceleration and velocity. Therefore, a mathematical model of the vehicle’s acceleration and velocity as a function of axis tilt may be used to estimate its horizontal position.
  • the velocity and acceleration of the vehicle may be calculated. Acceleration of the vehicle on each axis can be modeled by the following equation: a = (thrust − drag) / m, wherein m is the mass of the vehicle.
  • Equation (14) is an effective drag equation used for general body drag of a quadcopter assuming the quadcopter is treated as the rectangular volume that encloses all of the components of the vehicle except for the rotors:
  • F_d,body = (1/2) ρ v_a² C_D A, (14) wherein ρ is the density of the air, v_a is the wind (relative air) velocity, C_D is the drag coefficient, and A is the projected surface area of the quadcopter calculated by computing the projection of the quadcopter volume onto the 2D plane orthogonal to the direction of v_a.
  • a 3D model of the vehicle may be used and rough bounding rectangles may be drawn to calculate the surface area normal to each of the X and Y axes.
  • the projected surface area normal to the direction of movement may then be calculated as:
  • the thrust along the Z axis (upward) is equal to the acceleration due to gravity times the mass of the vehicle.
  • the thrust along the horizontal axis may be calculated.
  • relative velocity and position of the vehicle may be calculated via integration, as shown above.
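  • A minimal sketch of this per-axis model follows, assuming the vertical thrust balances gravity so the horizontal thrust component is m*g*tan(tilt), with body drag per Equation (14) and Euler integration for velocity and position; the constants are illustrative, not values from the disclosure.

        # Per-axis horizontal motion sketch: thrust from tilt, drag from Eq. (14),
        # then Euler integration to velocity and position. Constants are assumed.
        import math

        MASS = 0.5      # kg, assumed vehicle mass
        RHO = 1.225     # kg/m^3, air density
        CD = 1.0        # assumed drag coefficient
        AREA = 0.02     # m^2, assumed projected surface area
        G = 9.81        # m/s^2

        def simulate_axis(tilt_deg, duration_s=2.0, dt=0.01):
            v, x = 0.0, 0.0
            for _ in range(int(duration_s / dt)):
                thrust = MASS * G * math.tan(math.radians(tilt_deg))   # horizontal thrust component
                drag = 0.5 * RHO * CD * AREA * v * abs(v)              # Eq. (14); vehicle velocity
                                                                       # taken as relative air velocity (no wind)
                a = (thrust - drag) / MASS                             # acceleration on this axis
                v += a * dt                                            # integrate to velocity
                x += v * dt                                            # integrate to position
            return x, v

        print(simulate_axis(5.0))   # displacement and velocity after 2 s at 5 degrees of tilt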
  • the behavior of the vehicle along each axis is relatively uncoupled, allowing for each axis to be modeled independently of the rest.
  • the movement assumed along one axis does not affect the movement of the vehicle along any other axis. While this is not always true, the interference is negligible for the constraints of a modeling application.
  • FIG. 11 depicts an example model for vehicle control, according to various embodiments of the present disclosure. More specifically, FIG. 11 depicts an example model 1100 of axes of a vehicle (e.g., vehicle 104 of FIG. 1).
  • a desired location 1102 of a vehicle is provided to a comparison node 1104 of model 1100.
  • Comparison node 1104 also receives a global location of the vehicle and generates a location error, which is received at a conversion block 1106 of model 1100.
  • a rotation matrix may be applied to the relative x and y displacements of the vehicle before computing the feedback error for axis controllers 1104.
  • a location error in global coordinates is converted to relative coordinates (e.g., because axis 1108 controllers may use relative coordinates for correct control).
  • This conversion uses the rotation matrix in the following equations, wherein psi is the yaw (heading) of the vehicle:
        [x_relative]   [ cos(psi)  sin(psi)] [x_global]
        [y_relative] = [-sin(psi)  cos(psi)] [y_global]
  • a conversion from relative coordinates to global coordinates is performed. In some applications, only the conversion from global to relative may be necessary as the sensor feedback may inherently report in global coordinates:
        [x_global]   [cos(psi) -sin(psi)] [x_relative]
        [y_global] = [sin(psi)  cos(psi)] [y_relative]
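  • Written out with NumPy, the global-to-relative and relative-to-global conversions above might look like the following sketch, with psi the vehicle yaw in radians.

        # Rotate (x, y) vectors between the global frame and the vehicle-relative frame.
        import numpy as np

        def global_to_relative(error_global, psi):
            """Rotate a global-frame (x, y) error into the vehicle-relative frame."""
            c, s = np.cos(psi), np.sin(psi)
            R = np.array([[c, s],
                          [-s, c]])          # rotation by -psi
            return R @ np.asarray(error_global)

        def relative_to_global(vec_relative, psi):
            c, s = np.cos(psi), np.sin(psi)
            R = np.array([[c, -s],
                          [s, c]])           # rotation by +psi
            return R @ np.asarray(vec_relative)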
  • inputs received at the vehicle may be parsed and received at models 1112.
  • values conveyed to the vehicle may be a percentage of a maximum value (e.g., a maximum value stored in the drone’s firmware).
  • values may include a percentage of maximum tilt angle.
  • Yaw may refer to the percentage of maximum angular rate, and throttle may refer to the percentage of maximum velocity, with positive values accelerating the vehicle upward, negative values accelerating the drone downward, and a zero value maintaining the altitude of the vehicle.
  • any output from a controller and/or a processor that serves as an input into vehicle command may be clipped and rounded to integer format, which may cause significant nonlinearization of the overall model.
  • a throttle value in vehicle command (sometimes referred to as Gaz) represents a vertical velocity in meters per second, therefore, to calculate position, only a single integration may be needed.
  • Yaw represents an angular rate in radians per second.
  • yaw and throttle responses may be controlled using only a proportional coefficient of a proportional-integral-derivative (PID) controller.
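  • A sketch of a proportional-only command with the clip-and-round step described above follows; the gain, limit, and error value are illustrative.

        # Proportional-only control output, clipped and rounded to the integer
        # percentage values the vehicle command accepts. Gain and limit are assumed.
        def p_command(error, kp, limit=100):
            raw = kp * error
            clipped = max(-limit, min(limit, raw))   # clip to +/- limit percent
            return int(round(clipped))               # vehicle commands are integer percentages

        yaw_cmd = p_command(error=0.35, kp=120)      # e.g., 0.35 rad yaw error -> 42%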
  • Model 1100 further includes a vehicle response movement block 1116 (i.e., representing movement of the vehicle in response to one or more commands), a video streaming block 1118 for capturing images and identifying codes, and image processing block 1120 to determine a location of the vehicle based on one or more identified codes.
  • various embodiments include identifying and using codes (e.g., QR Codes) for location identification and navigation of vehicles.
  • one or more codes of a single input image (i.e., as captured by a vehicle) may be identified and extracted as subimages, wherein each subimage includes a region that includes a code.
  • FIG. 12A illustrates an image 1200 including a number of QR codes 1202, 1204, and 1206.
  • a binary threshold may be applied to pixels of image 1200 to set all low lightness pixels to a specific value (e.g. 0 (black)) while the remainder of the pixels remain at their original value.
  • image 1200 may be processed via a series of filters at different kernel sizes (e.g., for different threshold and kernel indices) to generate a number of output results (e.g., twelve output results for combinations of threshold indices ranging from 1 to 4 and kernel indices ranging from 1 to 3).
  • a code 1212 of FIG. 12B and a code 1222 of FIG. 12C correspond to code 1202 of FIG. 12A
  • a code 1214 of FIG. 12B and a code 1224 of FIG. 12C correspond to code 1204 of FIG. 12A
  • a code 1216 of FIG. 12B and a code 1226 of FIG. 12C correspond to code 1206 of FIG. 12A.
  • a shape detection process may be performed on each output result (e.g., the contours of each output result may be analyzed) to detect and identify potential regions of interest (ROIs) at least partially around each depicted code, as shown in FIGS. 12D and 12E.
  • a code 1232 of FIG. 12D and a code 1242 of FIG. 12E correspond to code 1202 of FIG. 12A
  • a code 1234 of FIG. 12D and a code 1244 of FIG. 12E correspond to code 1204 of FIG. 12A
  • a code 1236 of FIG. 12D and a code 1246 of FIG. 12E correspond to code 1206 of FIG. 12A.
  • pixels of the region may be designated as forbidden and may not be processed again (i.e. to avoid multiple detections).
  • the potential ROIs for each result may be added cumulatively to generate a ROI heatmap result 1250 shown in FIG. 12F.
  • a threshold may be applied to heatmap result 1250 such that potential code areas (i.e., areas determined to include codes) may be set to a value (e.g., 1 (white)), and all other pixels may be set to another value (e.g., 0 (black)).
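  • A rough sketch of this multi-filter ROI-heatmap idea is shown below; the threshold values, kernel sizes, minimum region size, and the majority-vote cutoff are illustrative choices, not values from the disclosure.

        # Accumulate candidate code regions over several threshold/blur settings,
        # then keep only areas that appear consistently across the filter outputs.
        import cv2
        import numpy as np

        def roi_heatmap(gray, thresholds=(60, 90, 120, 150), kernels=(3, 5, 7)):
            heat = np.zeros(gray.shape, dtype=np.float32)
            for t in thresholds:
                # low-lightness pixels -> 0, others keep their value (THRESH_TOZERO)
                _, filtered = cv2.threshold(gray, t, 255, cv2.THRESH_TOZERO)
                for k in kernels:
                    blurred = cv2.medianBlur(filtered, k)
                    edges = cv2.Canny(blurred, 50, 150)
                    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                                   cv2.CHAIN_APPROX_SIMPLE)
                    for c in contours:
                        x, y, w, h = cv2.boundingRect(c)
                        if w > 20 and h > 20:                 # ignore tiny regions
                            heat[y:y + h, x:x + w] += 1.0     # accumulate candidate ROI
            # keep only areas seen in a large fraction of the filter outputs
            mask = np.zeros_like(gray)
            if heat.max() > 0:
                mask = (heat >= 0.5 * heat.max()).astype(np.uint8) * 255
            return heat, mask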
  • a detection process may be applied to image 1200 to generate bounding boxes 1262, 1264, and 1266, as shown in FIG. 12H.
  • regions determined to include codes may have corner points, and possibly a bounding box, (i.e., assigned based on width and height) that determine a region of pixels that make up a subimage containing only an associated ROI.
  • bounding boxes may be enlarged by a factor (e.g., of 1.1) in each direction.
  • pixel locations of corners of the code may be identified, data of the code may be accessed, and a rectified image of the code may be generated.
  • the pixel locations, code data, and the rectified image may be used to determine a position and yaw rotation of the vehicle relative to the code. It is noted that the roll and pitch may not be captured (i.e., because the vehicle’s camera may be automatically leveled).
  • a control system of the vehicle may then translate and perform yaw rotations to move to a desired position and orientation relative to the code.
  • pixel locations of the corners of the code may be ordered (e.g., as top-left, top-right, bottom-right, and bottom-left).
  • A homography matrix may then be generated to define a coordinate system relative to the code. From the homography matrix, the scale of the code may be determined and used to determine the size of the code relative to a field of view 1300 as shown in FIG. 13.
  • a size of a code 1302 relative to field of view 1300 may be used to determine a distance from code 1302 (i.e., because the size of the physical dimensions of code 1302 is known).
  • a vector 1301 in a polar coordinate system may be determined, as shown in FIG. 13.
  • Vector 1301 denotes the location of code 1302 relative to a camera 1304 of a vehicle.
  • Vector 1301 may be determined from a position of code 1302 relative to field of view 1300 and the size of code 1302 in field of view 1300. A length of vector 1301 may be determined (i.e., because the size of the code is known a priori).
  • the translation (displacement of camera 1304 relative to the origin of code 1302) may be determined from the polar coordinates of vector 1301. Further, the polar coordinates may be transformed to Cartesian to determine x, y, and z translation of code 1302 relative to the vehicle.
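  • The disclosure derives the translation via a homography and polar coordinates; a compact alternative sketch that recovers the same x, y, z translation and a yaw estimate from the four corner pixels is OpenCV's solvePnP, shown below with assumed camera intrinsics and code size.

        # Pose of a planar code of known size from its four corner pixels,
        # using solvePnP (a swapped-in technique, not the disclosure's exact steps).
        import cv2
        import numpy as np

        CODE_SIZE = 0.20                                           # meters, assumed
        object_points = np.array([[-1, 1, 0], [1, 1, 0],
                                  [1, -1, 0], [-1, -1, 0]],
                                 dtype=np.float32) * (CODE_SIZE / 2)
        camera_matrix = np.array([[920, 0, 640],
                                  [0, 920, 360],
                                  [0, 0, 1]], dtype=np.float32)    # assumed intrinsics

        def code_pose(corner_pixels):
            """corner_pixels: 4x2 array ordered TL, TR, BR, BL."""
            img_pts = np.asarray(corner_pixels, dtype=np.float32)
            ok, rvec, tvec = cv2.solvePnP(object_points, img_pts, camera_matrix, None)
            if not ok:
                return None
            R, _ = cv2.Rodrigues(rvec)
            # yaw about the vertical axis (one common convention; sign depends on axis definitions)
            yaw = np.degrees(np.arctan2(R[2, 0], R[2, 2]))
            return tvec.ravel(), yaw                               # (x, y, z) of code in the camera frame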
  • a vehicle may include an onboard inertial measurement unit that provides roll, pitch, and yaw. Further, the vehicle may include a camera configured to maintain level orientation by zeroing out pitch and roll corrections from the vehicle. In some embodiments, the vehicle may only need four (4) degrees of freedom from the code (i.e., x, y, z translations, and a yaw rotation describing the alignment of the vehicle relative to the normal vector of the code). A vehicle may maneuver by thrust vectoring its motors, and the vehicle strafes laterally by rolling and translates forward and backward by pitching.
  • FIG. 14A is a flowchart of an example flow 1400 of detecting a code, in accordance with various embodiments of the disclosure.
  • Flow 1400 may be arranged in accordance with at least one embodiment described in the present disclosure.
  • Flow 1400 may be performed, in some embodiments, by a device or system, such as system 200 of FIG. 2, control loop 400 of FIG. 4, system 1700 of FIG. 17, or another device or system.
  • a code detection operation may be performed on an image 1402 at block 1404. At block 1406, it may be determined whether a code is detected. If a code is detected, flow 1400 may proceed to block 1408 where code processing may be performed. If a code is not detected, flow 1400 may proceed to block 1410.
  • a code ROI detection process may be performed to identify a number of ROIs. Further, for each identified ROI 1412, a code detection operation may be performed at block 1414. At block 1416, it may be determined whether a code is detected. If a code is detected, flow 1400 may proceed to block 1418 where code processing may be performed. If a code is not detected, flow 1400 may proceed to block 1419 wherein a “null” may be generated (an illustrative sketch of this full-image-then-ROI detection flow is provided at the end of this description).
  • code processing may be performed.
  • a template is rotated at block 1424 to determine corner points 1426 of the code.
  • a homography may be calculated, and, at block 1430, translations and rotations may be calculated.
  • flow 1400 and/or flow 1420 may be implemented in differing order.
  • the outlined operations and actions are only provided as examples, and some of the operations and actions may be optional, combined into fewer operations and actions, or expanded into additional operations and actions without detracting from the essence of the disclosed embodiment.
  • the camera of the vehicle may be treated as the vehicle’s position.
  • the vehicle’s position may be controlled (e.g., via a controller) to adjust pitch, yaw, and/or Gaz to maintain the vehicle in a stable flight envelope.
  • a rotation value that describes the “roll” of the code may be computed.
  • a homography shear value that measures the yaw of the code (when the code is not rolled relative to the camera) may also be computed.
  • the rotation and shear components are not independent and, through trigonometric identities, describe the pitch, yaw, and roll of the code in space. While this mathematically precise determination of yaw is obtainable through the homography matrix, it may be simpler and faster from a practical control standpoint to quantify yaw as the relative height of one side of the code (e.g., the left side of the code) to an opposite side of the code (e.g., the right side of the code), for example as (left height / right height) − 1 (an illustrative yaw-estimate sketch is provided at the end of this description).
  • This estimate may yield zero (0) when the vehicle’s camera is aligned with the normal vector of the code.
  • the estimate may yield a positive value
  • the estimate may yield a negative value.
  • a closed loop system may be used for controlling a vehicle.
  • the closed loop system may be configured such that vehicle movements may be determined based on a live feed of coordinates.
  • a vehicle controller may output “-limit” if the vehicle is past the desired location, “0” if the vehicle is at the desired location, or “limit” if the vehicle is not yet at the desired location (an illustrative per-axis command sketch is provided at the end of this description).
  • FIG. 15 depicts a control loop 1500, according to various embodiments of the present disclosure.
  • Control loop 1500, which illustrates a feedback configuration for each of the single-input-single-output loops representing the defined axes (i.e., roll, pitch, yaw, Gaz), includes a summation node 1502, a controller 1504, vehicle dynamics 1506, and sensor dynamics (e.g., a code) 1508.
  • Controller 1504 may include, for example, a proportional-integral-derivative (PID) controller, one or more proportional-integral (PI) controllers, or a modified fifth-order loop-shaped controller. As shown in FIG. 15, a desired location u of a vehicle is provided to summation node 1502, and an actual location y of the vehicle is generated via vehicle dynamics 1506 (an illustrative single-axis PID sketch is provided at the end of this description).
  • FIG. 16 is a flowchart of an example method 1600 of operating a navigation system, in accordance with various embodiments of the disclosure.
  • Method 1600 may be arranged in accordance with at least one embodiment described in the present disclosure.
  • Method 1600 may be performed, in some embodiments, by a device or system, such as system 200 of FIG. 2, control loop 400 of FIG. 4, system 600 of FIG. 6, model 1100 of FIG. 11, control loop 1500 of FIG. 15, system 1700 of FIG. 17, or another device or system.
  • Method 1600 may begin at block 1602, wherein an image may be received from an unmanned vehicle, and method 1600 may proceed to block 1604.
  • the image may be captured via a camera of the unmanned vehicle (e.g., vehicle 104 of FIG. 1).
  • At block 1604 at least one feature (e.g., a code, such as a QR code) within the image may be detected, and method 1600 may proceed to block 1606.
  • computer vision module 206 of FIG. 2, which may include system 600 of FIG. 6, may detect the at least one feature (e.g., a QR code) within the image.
  • one or more embodiments described with reference to FIGS. 12A-12H may be used to detect the at least one feature.
  • each feature of the at least one feature may include a known size, a known shape, a known color, a known pattern, or any combination thereof.
  • each feature of the at least one feature may be inserted within an environment (e.g., at desired locations).
  • a location of the vehicle may be determined based on the at least one feature, and method 1600 may proceed to block 1608. More specifically, for example, the location of the vehicle may be determined based on a position of the at least one feature and a position of the vehicle relative to the code. For example, the location of the vehicle may be determined via location module 208 of FIG. 2 and/or one or more of the embodiments described with reference to FIGS. 8A-10E. As another example, one or more embodiments described with reference to FIGS. 13, 14A, and 14B may be used to determine the location of the vehicle. At block 1608, one or more commands may be conveyed to the vehicle based on the location of the vehicle.
  • one or more commands to control the vehicle may be conveyed to the vehicle.
  • one or more commands instructing the vehicle to perform one or more tasks may be conveyed to the vehicle.
  • one or more of the embodiments described with reference to FIGS. 11 and 15 may be used to control the vehicle via one or more commands.
  • method 1600 may include one or more acts wherein the code may be decoded (e.g., to determine a location of the code).
  • method 1600 may include one or more acts wherein a bounding box may be generated around the code.
  • FIG. 17 is a block diagram of an example system 1700, which may be configured according to at least one embodiment described in the present disclosure.
  • system 1700 may include a processor 1702, a memory 1704, a data storage 1706, and a communication unit 1708.
  • main module 202, vehicle module 204, location module 208 and control module 210 of FIG. 2, control loop 400 of FIG. 4, and vehicle 304 of FIG. 3, or parts thereof, may be or include an instance of system 1700.
  • processor 1702 may include any suitable special-purpose or general- purpose computer, computing entity, or processing device including various computer hardware or software modules and may be configured to execute instructions stored on any applicable computer-readable storage media.
  • processor 1702 may include a microprocessor, a microcontroller, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a Field-Programmable Gate Array (FPGA), or any other digital or analog circuitry configured to interpret and/or to execute program instructions and/or to process data.
  • processor 1702 may interpret and/or execute program instructions and/or process data stored in memory 1704, data storage 1706, or memory 1704 and data storage 1706. In some embodiments, processor 1702 may fetch program instructions from data storage 1706 and load the program instructions in memory 1704. After the program instructions are loaded into memory 1704, processor 1702 may execute the program instructions, such as instructions to perform one or more operations described in the present disclosure.
  • Memory 1704 and data storage 1706 may include computer-readable storage media or one or more computer-readable storage mediums for carrying or having computer- executable instructions or data structures stored thereon.
  • Such computer-readable storage media may be any available media that may be accessed by a general-purpose or special- purpose computer, such as processor 1702.
  • Such computer-readable storage media may include non-transitory computer-readable storage media including Random Access Memory (RAM), Read-Only Memory (ROM), and/or other non-transitory storage media.
  • Computer-executable instructions may include, for example, instructions and data configured to cause processor 1702 to perform a certain operation or group of operations (e.g., related to embodiments disclosed herein).
  • Communication unit 1708 may be configured to provide for communications with other devices (e.g., through RF transmissions).
  • communication unit 1708 may be configured to transmit to and receive signals from an unmanned vehicle.
  • Communication unit 1708 may include suitable components for communications including, as non-limiting examples, a radio, one or more antennas, one or more encoders and decoders, and/or a power supply.
  • Embodiment 1 a navigation system, comprising: one or more processors configured to communicatively couple with an unmanned vehicle within an environment, the one or more processors further configured to: receive an image from the unmanned vehicle; detect one or more codes depicted within the image; determine a location of the unmanned vehicle based on the one or more codes; and convey one or more commands to the unmanned vehicle based on the location of the unmanned vehicle.
  • Embodiment 2 the device of Embodiment 1, wherein the code is sought within the region surrounding the last found code region.
  • Embodiment 3 the device of any of Embodiments 1 and 2, wherein the view/distortion and location of the code in the image are used to estimate the relative distance and angles of the vehicle with respect to the code.
  • Embodiment 4 the device of any of Embodiments 1 to 3, wherein the angles and distance of the drone with respect to the code are calculated by means of geometrical projections of the code onto the vehicle’s various planes and comparing the projections with the pixel-based dimensions of the code in the image.
  • Embodiment 5 the device of any of Embodiments 1 to 4, wherein the angles and distance of the drone with respect to the code are calculated by comparing the ratio of the code dimensions and the associated angles to virtually or experimentally generated dimensions and angles.
  • Embodiment 6 the device of any of Embodiments 1 to 5, wherein the location and/or navigation information is configured into and extracted from the code data that are decoded by the vehicle.
  • Embodiment 7 the device of any of Embodiments 1 to 6, wherein the code is identified by a visual feature and is mapped to the location and/or navigation information and instructions by a mapping table.
  • Embodiment 8 the device of any of Embodiments 1 to 7, wherein the code contains route update information to update a mapping table.
  • Embodiment 9 the device of any of Embodiments 1 to 8, wherein the code contains detour information that is used for a certain period of time.
  • Embodiment 10 the device of any of Embodiments 1 to 9, wherein the vehicle is considered to be at a desired location, within an allowable location tolerance, after a certain number of frames are analyzed to determine that the vehicle is within the tolerance.
  • Embodiment 10 a method, comprising: receiving an image from a vehicle positioned within an environment; detecting at least one code within the image; determining a location of the vehicle based on the at least one code; and conveying one or more commands to the vehicle based on the location of the vehicle.
  • Embodiment 11 the method of Embodiment 10, wherein the command is based on observing the vehicle’s movement in response to various commands to determine what thrust and angle will result in the desired location change.
  • Embodiment 12 the method of any of Embodiments 10 and 11, wherein the observations of vehicle performance are used to update the magnitude of each command as the vehicle is flown.
  • Embodiment 13 the method of any of Embodiments 10 to 12, further comprising applying at least one filter to the image.
  • Embodiment 14 the method of any of Embodiments 10 to 13, further comprising combining filtered images to determine potential regions in a heatmap.
  • Embodiment 15 the method of any of Embodiments 10 to 14, further comprising positioning codes with known sizes within the environment.

While the present disclosure has been described herein with respect to certain illustrated embodiments, those of ordinary skill in the art will recognize and appreciate that it is not so limited. Rather, many additions, deletions, and modifications to the illustrated embodiments may be made without departing from the scope of the invention as hereinafter claimed, including legal equivalents thereof. In addition, features from one embodiment may be combined with features of another embodiment while still being encompassed within the scope of the invention. Further, embodiments of the disclosure have utility with different and various detector types and configurations.
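As an illustrative sketch of the ROI-heatmap thresholding and bounding-box step described for FIGS. 12F-12H, the following assumes NumPy and OpenCV; the function name rois_from_heatmap and the vote-count threshold are hypothetical, while the 1.1 enlargement factor follows the example given above. It is a minimal sketch, not the disclosed implementation.

```python
import cv2
import numpy as np

def rois_from_heatmap(heatmap, threshold=3, pad_factor=1.1):
    """Threshold an accumulated ROI heatmap and return enlarged bounding boxes."""
    # Pixels voted for by at least `threshold` filter results become potential
    # code areas (white); all other pixels are suppressed (black).
    mask = np.where(heatmap >= threshold, 255, 0).astype(np.uint8)

    # Each connected white region becomes a candidate code region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        # Enlarge the box by `pad_factor` in each direction so the sub-image
        # safely contains the full code.
        cx, cy = x + w / 2.0, y + h / 2.0
        w, h = w * pad_factor, h * pad_factor
        boxes.append((int(cx - w / 2), int(cy - h / 2), int(round(w)), int(round(h))))
    return boxes
```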
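A minimal sketch of the full-image-then-ROI detection logic of flow 1400 (blocks 1404-1419), assuming OpenCV's QR-code detector; the helper name detect_code and the (x, y, w, h) box format are assumptions, and the ROI list could come from a routine such as the rois_from_heatmap sketch above.

```python
import cv2

detector = cv2.QRCodeDetector()

def detect_code(image, roi_boxes):
    """Try the full frame first; fall back to candidate ROIs; return None ("null")."""
    data, points, _ = detector.detectAndDecode(image)
    if points is not None and data:
        return data, points  # code detected and decoded in the full image

    for (x, y, w, h) in roi_boxes:
        x, y = max(x, 0), max(y, 0)
        sub = image[y:y + h, x:x + w]
        if sub.size == 0:
            continue
        data, points, _ = detector.detectAndDecode(sub)
        if points is not None and data:
            return data, points + (x, y)  # shift corners back to full-image coordinates
    return None  # no code detected ("null" result)
```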
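A minimal sketch of the distance and x, y, z translation estimate from a code of known physical size (FIG. 13). It uses a simple pinhole-camera approximation rather than the full homography computation; the focal length in pixels, the corner ordering, and the axis convention (x right, y down, z forward) are assumptions for illustration.

```python
import numpy as np

def code_translation(corners_px, code_size_m, image_size_px, focal_px):
    """Estimate range and (x, y, z) translation of a code of known size.

    corners_px: 4x2 corner pixels ordered top-left, top-right, bottom-right, bottom-left.
    """
    corners = np.asarray(corners_px, dtype=float)
    center_px = corners.mean(axis=0)

    # Apparent side length in pixels (mean of the four sides).
    side_px = np.mean([np.linalg.norm(corners[i] - corners[(i + 1) % 4]) for i in range(4)])

    # Known physical size and apparent size give an approximate range to the code.
    distance = focal_px * code_size_m / side_px

    # Direction of the code center relative to the optical axis (image center).
    cx, cy = image_size_px[0] / 2.0, image_size_px[1] / 2.0
    direction = np.array([center_px[0] - cx, center_px[1] - cy, focal_px])
    direction /= np.linalg.norm(direction)

    # Cartesian translation of the code relative to the camera.
    x, y, z = distance * direction
    return distance, (x, y, z)
```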
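A minimal sketch of the relative-height yaw estimate described above; the corner ordering and the "- 1" normalization (so the estimate is zero when the camera is aligned with the code's normal vector) are assumptions consistent with the description.

```python
def yaw_estimate(corners_px):
    """Ratio of the code's left-side height to its right-side height, minus one."""
    tl, tr, br, bl = corners_px  # top-left, top-right, bottom-right, bottom-left
    left_height = abs(bl[1] - tl[1])
    right_height = abs(br[1] - tr[1])
    return left_height / right_height - 1.0  # ~0 when facing the code head-on
```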
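A minimal sketch of the per-axis controller output described above ("-limit", "0", or "limit"); the tolerance parameter used to decide that the vehicle is at the desired location is an assumption.

```python
def axis_command(error, tolerance, limit):
    """Map a signed position error on one axis to a saturated command."""
    if error > tolerance:
        return limit       # not yet at the desired location
    if error < -tolerance:
        return -limit      # past the desired location
    return 0               # within tolerance: hold position
```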
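A minimal single-axis discrete PID sketch in the spirit of controller 1504 of control loop 1500 (one loop per axis: roll, pitch, yaw, or Gaz); the gains, the sample time dt, and the class name AxisPid are placeholders, and the modified fifth-order loop-shaped alternative mentioned above is not reproduced here.

```python
class AxisPid:
    """Discrete PID for one axis: error = desired location u minus measured location y."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, u, y):
        error = u - y                      # output of summation node 1502
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Command passed to the vehicle dynamics for this axis.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```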

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

Various embodiments of the present disclosure relate to navigation of unmanned vehicles. A navigation system may include one or more processors configured to communicatively couple with an unmanned vehicle. The one or more processors may be configured to receive an image from the unmanned vehicle and detect a feature within the image. The one or more processors may further be configured to determine a location of the unmanned vehicle based on the feature and convey one or more commands to the unmanned vehicle based on the location of the unmanned vehicle. Related methods and a computer-readable medium are also disclosed.
PCT/US2020/060473 2019-11-13 2020-11-13 Navigation de véhicule sans pilote, et procédés, systèmes et support lisible par ordinateur associés WO2021141666A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/755,878 US20220383541A1 (en) 2019-11-13 2020-11-13 Unmanned vehicle navigation, and associated methods, systems, and computer-readable medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962934976P 2019-11-13 2019-11-13
US62/934,976 2019-11-13
US202063090645P 2020-10-12 2020-10-12
US63/090,645 2020-10-12

Publications (2)

Publication Number Publication Date
WO2021141666A2 true WO2021141666A2 (fr) 2021-07-15
WO2021141666A3 WO2021141666A3 (fr) 2021-08-19

Family

ID=76788164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/060473 WO2021141666A2 (fr) 2019-11-13 2020-11-13 Navigation de véhicule sans pilote, et procédés, systèmes et support lisible par ordinateur associés

Country Status (1)

Country Link
WO (1) WO2021141666A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115793715A (zh) * 2023-01-05 2023-03-14 雄安雄创数字技术有限公司 一种无人机辅助飞行方法、系统、装置及存储介质
CN117930869A (zh) * 2024-03-21 2024-04-26 山东智航智能装备有限公司 一种基于视觉的飞行装置着陆方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9463574B2 (en) * 2012-03-01 2016-10-11 Irobot Corporation Mobile inspection robot
EP3158293B1 (fr) * 2015-05-23 2019-01-23 SZ DJI Technology Co., Ltd. Fusion de capteurs à l'aide de capteurs inertiels et d'images
GB2568286B (en) * 2017-11-10 2020-06-10 Horiba Mira Ltd Method of computer vision based localisation and navigation and system for performing the same

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115793715A (zh) * 2023-01-05 2023-03-14 雄安雄创数字技术有限公司 一种无人机辅助飞行方法、系统、装置及存储介质
CN115793715B (zh) * 2023-01-05 2023-04-28 雄安雄创数字技术有限公司 一种无人机辅助飞行方法、系统、装置及存储介质
CN117930869A (zh) * 2024-03-21 2024-04-26 山东智航智能装备有限公司 一种基于视觉的飞行装置着陆方法及装置

Also Published As

Publication number Publication date
WO2021141666A3 (fr) 2021-08-19

Similar Documents

Publication Publication Date Title
CN111527463B (zh) 用于多目标跟踪的方法和系统
Forster et al. Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles
Yang et al. An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle
Courbon et al. Vision-based navigation of unmanned aerial vehicles
US20200357141A1 (en) Systems and methods for calibrating an optical system of a movable object
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
Clark et al. Autonomous and scalable control for remote inspection with multiple aerial vehicles
Deng et al. Global optical flow-based estimation of velocity for multicopters using monocular vision in GPS-denied environments
Vetrella et al. RGB-D camera-based quadrotor navigation in GPS-denied and low light environments using known 3D markers
CN114004977A (zh) 一种基于深度学习的航拍数据目标定位方法及系统
WO2021141666A2 (fr) Navigation de véhicule sans pilote, et procédés, systèmes et support lisible par ordinateur associés
Natraj et al. Vision based attitude and altitude estimation for UAVs in dark environments
Han et al. Geolocation of multiple targets from airborne video without terrain data
Zarei et al. Indoor UAV object detection algorithms on three processors: implementation test and comparison
Abdulov et al. Visual odometry approaches to autonomous navigation for multicopter model in virtual indoor environment
Gabdullin et al. Analysis of onboard sensor-based odometry for a quadrotor uav in outdoor environment
US20220383541A1 (en) Unmanned vehicle navigation, and associated methods, systems, and computer-readable medium
Al-Kaff Vision-based navigation system for unmanned aerial vehicles
Tehrani et al. Low-altitude horizon-based aircraft attitude estimation using UV-filtered panoramic images and optic flow
Podhradsky Visual servoing for a quadcopter flight control
Andert et al. Combined grid and feature-based occupancy map building in large outdoor environments
Petrlík Onboard localization of an unmanned aerial vehicle in an unknown environment
Climent-Pérez et al. Telemetry-based search window correction for airborne tracking
Ozaslan Estimation, Mapping and Navigation with Micro Aerial Vehicles for Infrastructure Inspection
Ramirez-Torres et al. Real-time reconstruction of heightmaps from images taken with an UAV

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20911670

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20911670

Country of ref document: EP

Kind code of ref document: A2