WO2022077829A1 - Large scope point cloud data generation and optimization - Google Patents

Large scope point cloud data generation and optimization

Info

Publication number
WO2022077829A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
mapping data
mobile device
movable object
payload
Prior art date
Application number
PCT/CN2021/077854
Other languages
French (fr)
Inventor
Kalyani Premji NIRMAL
Alain PIMENTEL
Comran MORSHED
Original Assignee
SZ DJI Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd.
Publication of WO2022077829A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/51 Display arrangements
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • the disclosed embodiments relate generally to techniques for mapping and more particularly, but not exclusively, to techniques for real-time visualization of mapping data on a mobile device in a movable object environment.
  • Movable objects such as unmanned aerial vehicles (UAVs) can be used for performing surveillance, reconnaissance, and exploration tasks for various applications.
  • Movable objects may carry a payload, including various sensors, which enables the movable object to capture sensor data during movement of the movable objects.
  • the captured sensor data may be rendered on a mobile device, such as a mobile device in communication with the movable object via a remote control, remote server, or other computing device.
  • a method of real-time visualization of mapping data may include obtaining mapping data from a scanning sensor, storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory, generating a real-time visualization of the mapping data stored in the dynamic buffers, and rendering the real-time visualization of the mapping data on a display coupled to the mobile device.
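  • As an illustration only (not part of the disclosure), the recited steps can be sketched as a small pipeline; all names below (MappingBatch, DynamicBufferPool, Renderer, RealTimeVisualizationPipeline) are hypothetical.

```kotlin
// Illustrative sketch of the recited method: obtain mapping data, store it in dynamic
// buffers, generate a real-time visualization, and render it. All names are hypothetical.

data class MappingBatch(val points: FloatArray, val colors: FloatArray)

interface DynamicBufferPool {                        // backed by non-contiguous memory blocks
    fun store(points: FloatArray, colors: FloatArray)
}

class Visualization                                  // opaque handle for the rendered frame

interface Renderer {
    fun buildVisualization(buffers: DynamicBufferPool): Visualization
    fun present(v: Visualization)
}

class RealTimeVisualizationPipeline(
    private val buffers: DynamicBufferPool,
    private val renderer: Renderer
) {
    // Invoked for each batch of mapping data obtained from the scanning sensor.
    fun onMappingData(batch: MappingBatch) {
        buffers.store(batch.points, batch.colors)                 // store in dynamic buffers
        val visualization = renderer.buildVisualization(buffers)  // generate visualization
        renderer.present(visualization)                           // render on the display
    }
}
```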
  • FIG. 1 illustrates an example of a movable object in a movable object environment, in accordance with various embodiments.
  • FIG. 2 illustrates an example of a movable object architecture in a movable object environment, in accordance with various embodiments.
  • FIG. 3 illustrates an example of mobile device and payload architectures, in accordance with various embodiments.
  • FIG. 4 illustrates an example of an adapter apparatus in a movable object environment, in accordance with various embodiments.
  • FIG. 5 illustrates an example of generating a real-time visualization of mapping data by a mobile device, in accordance with various embodiments.
  • FIGS. 6A and 6B illustrate data processing based on a circular buffer in a data processing system, in accordance with various embodiments.
  • FIG. 7 illustrates an example of a split screen user interface, in accordance with various embodiments.
  • FIG. 8 illustrates an example of user interface, in accordance with various embodiments.
  • FIG. 9 illustrates an example of overlaying information in mapping data, in accordance with various embodiments.
  • FIG. 10 illustrates an example of supporting a movable object interface in a software development environment, in accordance with various embodiments.
  • FIG. 11 illustrates an example of a movable object interface, in accordance with various embodiments.
  • FIG. 12 illustrates an example of components for a movable object in a software development kit (SDK) , in accordance with various embodiments.
  • FIG. 13 shows a flowchart of a method of mapping using a payload in a movable object environment, in accordance with various embodiments.
  • LiDAR sensors can be used to generate very accurate maps of a target environment.
  • LiDAR sensors produce a significant amount of data that is generally not readily viewable by a person as collected. Instead, significant configuration of the LiDAR sensor and of additional sensors such as positioning sensors, along with significant post-processing of the collected data, is needed to yield a map that can be usefully interpreted and/or used by a human for various applications.
  • a LiDAR sensor may collect mapping data relative to the LiDAR sensor itself, and a highly accurate inertial navigation system is required to generate mapping data that can be transformed into a useful coordinate system (e.g., a global coordinate system).
  • Because mapping data in conventional systems is not readily human-interpretable the way traditional image data is, the human operator of a mapping system cannot readily identify any areas of the target environment that have not been mapped, or have been incompletely mapped. Instead, the operator of these conventional systems must wait for the data to be post-processed, which may take hours or days. Then, upon discovering that the mapping data is incomplete, the operator must perform additional mapping missions in the target environment to attempt to complete collection of the mapping data. Depending on the skill of the operator, this process may iterate several times before the mapping is complete.
  • Embodiments enable a movable object to map a target environment using a payload that comprises a plurality of sensors.
  • the payload can include a scanning sensor, one or more cameras, and an inertial navigation system (INS) .
  • This payload can be connected to a UAV through a single port which provides a mechanical mounting point as well as managing power and data communication with the payload.
  • the mapping data collected by the payload can be provided to a mobile device, such as a smartphone, tablet, or other mobile device and visualized in real-time.
  • the visualization may be generated and displayed to the operator within a scale of milliseconds (e.g., within 100 milliseconds, or between 10 milliseconds to 1 second) of the mapping data being received and stored by the mobile device.
  • the mobile device can include a real-time visualization application which manages the memory and computing resources of the mobile device to facilitate the real-time visualization generation.
  • the real-time visualization application provides both a visualization interface and a control interface through which the operator can control the payload and/or the UAV. For example, when the real-time visualization application displays the current mapping data, the operator can interact with the visualization using the mobile device, such as through a plurality of gesture-based inputs.
  • the real-time visualization application can simultaneously show a camera view synchronized to a point cloud view. For example, video data from a visible light camera of the payload and a real-time point cloud visualization from a LiDAR sensor of the payload can be displayed simultaneously, providing the operator with a more complete view of the target environment.
  • the operator can choose to view the synchronized view, or a single view of either the real time point cloud visualization or a real time camera view.
  • the user may interact with the real-time visualization via a user interface, such as through the plurality of gesture-based inputs.
  • instructions or commands may be sent to the UAV or the payload, such that the poses of the UAV or the payload may be adjusted to change the point cloud view and/or the camera view in response to the user gesture-based inputs in real-time.
  • FIG. 1 illustrates an example of a movable object in a movable object environment 100, in accordance with various embodiments.
  • mobile device 110 in a movable object environment 100 can communicate with a movable object 104 via a communication link 106.
  • the movable object 104 can be an unmanned aircraft, an unmanned vehicle, a handheld device, and/or a robot.
  • the mobile device 110 can be a portable personal computing device, a smart phone, a remote control, a wearable computer, a virtual reality/augmented reality system, and/or a personal computer.
  • the mobile device 110 can include a remote control 111 and communication system 120A, which is responsible for handling the communication between the mobile device 110 and the movable object 104 via communication system 120B.
  • the communication between the mobile device 110 and the movable object 104 can include uplink and downlink communication.
  • the uplink communication can be used for transmitting control signals or commands
  • the downlink communication can be used for transmitting media or video stream, mapping data collected by scanning sensors, or other sensor data collected by other sensors.
  • the communication link 106 can be (part of) a network based on various wireless technologies, such as WiFi, Bluetooth, 3G/4G, and other radio frequency technologies. Furthermore, the communication link 106 can be based on other computer network technologies, such as internet technology, or any other wired or wireless networking technology. In some embodiments, the communication link 106 may be a non-network technology, including direct point-to-point connections such as universal serial bus (USB) or universal asynchronous receiver-transmitter (UART).
  • movable object 104 in a movable object environment 100 can include an adapter apparatus 122 and a payload 124, such as a scanning sensor (e.g., a LiDAR sensor) , camera (s) , and/or a collection of sensors in a single payload unit.
  • the adapter apparatus 122 includes a port for coupling the payload 124 to the movable object 104 which provides power, data communications, and structural support for the payload 124.
  • the movable object 104 is described generally as an aircraft, this is not intended to be limiting, and any suitable type of movable object can be used.
  • any of the embodiments described herein in the context of aircraft systems can be applied to any suitable movable object (e.g., a UAV) .
  • the payload 124 may be provided on the movable object 104 without requiring the adapter apparatus 122.
  • the movable object 104 may include one or more movement mechanisms 116 (e.g., propulsion mechanisms) , a sensing system 118, and a communication system 120B.
  • the movement mechanisms 116 can include one or more of rotors, propellers, blades, engines, motors, wheels, axles, magnets, nozzles, animals, or human beings.
  • the movable object may have one or more propulsion mechanisms.
  • the movement mechanisms may all be of the same type. Alternatively, the movement mechanisms can be different types of movement mechanisms.
  • the movement mechanisms 116 can be mounted on the movable object 104 (or vice-versa) , using any suitable means such as a support element (e.g., a drive shaft) .
  • the movement mechanisms 116 can be mounted on any suitable portion of the movable object 104, such as on the top, bottom, front, back, sides, or suitable combinations thereof.
  • the movement mechanisms 116 can enable the movable object 104 to take off vertically from a surface or land vertically on a surface without requiring any horizontal movement of the movable object 104 (e.g., without traveling down a runway) .
  • the movement mechanisms 116 can be operable to permit the movable object 104 to hover in the air at a specified position and/or orientation.
  • One or more of the movement mechanisms 116 may be controlled independently of the other movement mechanisms, for example by a real-time visualization application 128 executing on mobile device 110 or other computing device in communication with the movement mechanisms.
  • the movement mechanisms 116 can be configured to be controlled simultaneously.
  • the movable object 104 can have multiple horizontally oriented rotors that can provide lift and/or thrust to the movable object.
  • the multiple horizontally oriented rotors can be actuated to provide vertical takeoff, vertical landing, and hovering capabilities to the movable object 104.
  • one or more of the horizontally oriented rotors may spin in a clockwise direction, while one or more of the horizontally oriented rotors may spin in a counterclockwise direction.
  • the number of clockwise rotors may be equal to the number of counterclockwise rotors.
  • the rotation rate of each of the horizontally oriented rotors can be varied independently in order to control the lift and/or thrust produced by each rotor, and thereby adjust the spatial disposition, velocity, and/or acceleration of the movable object 104 (e.g., with respect to up to three degrees of translation and up to three degrees of rotation).
  • a controller such as flight controller 114, can send movement commands to the movement mechanisms 116 to control the movement of movable object 104. These movement commands may be based on and/or derived from instructions received from mobile device 110 or other entity.
  • the sensing system 118 can include one or more sensors that may sense the spatial disposition, velocity, and/or acceleration of the movable object 104 (e.g., with respect to various degrees of translation and various degrees of rotation) .
  • the one or more sensors can include any of the sensors, including GPS sensors, real-time kinematic (RTK) sensors, motion sensors, inertial sensors, proximity sensors, or image sensors.
  • the sensing data provided by the sensing system 118 can be used to control the spatial disposition, velocity, and/or orientation of the movable object 104 (e.g., using a suitable processing unit and/or control module) .
  • the sensing system 118 can be used to provide data regarding the environment surrounding the movable object, such as weather conditions, proximity to potential obstacles, location of geographical features, location of manmade structures, and the like.
  • the communication system 120B enables communication with mobile device 110 via communication link 106, which may include various wired and/or wireless technologies as discussed above, and communication system 120A.
  • the communication system 120A or 120B may include any number of transmitters, receivers, and/or transceivers suitable for wireless communication.
  • the communication may be one-way communication, such that data can be transmitted in only one direction.
  • one-way communication may involve only the movable object 104 transmitting data to the mobile device 110, or vice-versa.
  • the data may be transmitted from one or more transmitters of the communication system 120B of the movable object 104 to one or more receivers of the communication system 120A of the mobile device 110, or vice-versa.
  • the communication may be two-way communication, such that data can be transmitted in both directions between the movable object 104 and the mobile device 110.
  • the two-way communication can involve transmitting data from one or more transmitters of the communication system 120B of the movable object 104 to one or more receivers of the communication system 120A of the mobile device 110, and transmitting data from one or more transmitters of the communication system 120A of the mobile device 110 to one or more receivers of the communication system 120B of the movable object 104.
  • a real-time visualization application 128 executing on mobile device 110 or other computing devices that are in communication with the movable object 104 can provide control data to one or more of the movable object 104, adapter apparatus 122, and payload 124 and receive information from one or more of the movable object 104, adapter apparatus 122, and payload 124 (e.g., position and/or motion information of the movable object, adapter apparatus or payload; data sensed by the payload such as image data captured by one or more payload cameras or mapping data captured by a payload LiDAR sensor; and data generated from image data captured by the payload camera or LiDAR data generated from mapping data captured by the payload LiDAR sensor) .
  • the mobile device comprises a touchscreen display; at least one processor; and a memory, the memory including instructions which, when executed by the at least one processor, cause the mobile device to: obtain mapping data from a scanning sensor; store the mapping data in one or more dynamic buffers in the memory, the dynamic buffers comprising non-contiguous portions of the memory; generate a real-time visualization of the mapping data stored in the dynamic buffers; and render the real-time visualization of the mapping data on a display coupled to the mobile device.
  • control data may result in a modification of the location and/or orientation of the movable object (e.g., via control of the movement mechanisms 116) , or a movement of the payload with respect to the movable object (e.g., via control of the adapter apparatus 122) .
  • the control data from the real-time visualization application 128 may result in control of the payload 124, such as control of the operation of a scanning sensor, a camera or other image capturing device (e.g., taking still or moving pictures, zooming in or out, turning on or off, switching imaging modes, changing image resolution, changing focus, changing depth of field, changing exposure time, changing viewing angle or field of view, adding or removing waypoints, etc. ) .
  • the communications from the movable object, adapter apparatus and/or payload may include information obtained from one or more sensors (e.g., of the sensing system 118 or of the payload 124 or other payload) and/or data generated based on the sensing information.
  • the communications may include sensed information obtained from one or more different types of sensors (e.g., GPS sensors, RTK sensors, motion sensors, inertial sensors, proximity sensors, or image sensors) .
  • Such information may pertain to the position (e.g., location, orientation) , movement, or acceleration of the movable object, adapter apparatus, and/or payload.
  • Such information from a payload may include data captured by the payload or a sensed state of the payload.
  • the movable object 104 and/or payload 124 can include one or more processors, such as CPUs, GPUs, field programmable gate arrays (FPGAs) , system on chip (SoC) , application-specific integrated circuit (ASIC) , or other processors and/or accelerators.
  • the payload may include various sensors integrated into a single payload, such as a LiDAR sensor, one or more cameras, an inertial navigation system, etc.
  • the payload can collect sensor data that is used to provide LiDAR-based mapping for various applications, such as construction, surveying, target inspection, etc.
  • sensor data may be obtained by real-time visualization application 128 from the payload 124, e.g., via connection 106 or through another connection between the mobile device 110 and the movable object 104 and/or payload 124.
  • the connection may be intermediated by one or more other networks and/or systems.
  • the movable object or payload may connect to a server in a cloud computing system, a satellite, or other communication system, which may provide the sensor data to the mobile device 110.
  • When the real-time visualization application 128 receives the sensor data from the payload, it can store the data in a plurality of dynamic buffers (e.g., circular buffers comprising a plurality of buffer blocks) in the memory of the mobile device.
  • the dynamic buffers may include non-contiguous memory space in the mobile device. Additionally, blocks can be added or removed from the dynamic buffers, allowing the size of the dynamic buffers to be adjusted dynamically. Since a large amount of point cloud data may be divided into small blocks in different sizes and be stored non-contiguously to any empty buffer blocks, all empty space of the memory buffer may be filled and fully utilized. Therefore, the features provided in the disclosure including allocating point cloud data to buffer blocks with dynamic size and storing point cloud data non-contiguously may optimize memory management of the system and provide lower latency for rendering large scale point cloud data in real-time.
  • the real-time visualization application 128 can store point data in dynamic point buffers and color data in dynamic color buffers.
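  • A minimal Kotlin sketch of such storage follows, assuming fixed-capacity blocks allocated on demand; the block size, class name, and flat xyz/rgb layout are assumptions, not details taken from the disclosure.

```kotlin
// Illustrative dynamic point/color buffers: data is appended into fixed-capacity blocks
// that are allocated on demand and need not be contiguous in memory. Points and colors are
// treated as flat xyz/rgb float arrays of equal length.

class DynamicPointColorBuffers(private val blockCapacity: Int = 3 * 65_536) {
    private val pointBlocks = mutableListOf<FloatArray>()   // dynamic point buffers
    private val colorBlocks = mutableListOf<FloatArray>()   // dynamic color buffers
    private var writeOffset = blockCapacity                 // cursor within the newest block

    fun store(points: FloatArray, colors: FloatArray) {
        require(points.size == colors.size)
        var i = 0
        while (i < points.size) {
            if (writeOffset == blockCapacity) {
                // Allocate a new (possibly non-contiguous) block pair on demand.
                pointBlocks += FloatArray(blockCapacity)
                colorBlocks += FloatArray(blockCapacity)
                writeOffset = 0
            }
            val n = minOf(blockCapacity - writeOffset, points.size - i)
            points.copyInto(pointBlocks.last(), writeOffset, i, i + n)
            colors.copyInto(colorBlocks.last(), writeOffset, i, i + n)
            writeOffset += n
            i += n
        }
    }
}
```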
  • the sensor data may also be stored to disk on the mobile device 110. By storing to disk, the sensor data can be recovered if the data stored in memory fails, is corrupted, etc.
  • the real-time visualization application 128 can then generate a real-time visualization of the sensor data.
  • the sensor data may be received in batches that each represent a particular amount of time. For example, each batch of sensor data may represent 300 milliseconds of scanning data, or scanning data covering any other amount of time, such as 10 milliseconds to 1 second.
  • the real-time visualization application 128 can retrieve all or a portion of the sensor data to generate the visualization.
  • the real-time visualization application 128 can additionally enable measurements of objects, distances, etc., in the point cloud visualization to be obtained.
  • the sensor data can include scanning data obtained from a LiDAR sensor or other sensor that provides high resolution scanning of a target environment, pose data indicating the attitude of the payload when the scanning data was obtained (e.g., from an inertial measurement unit) , and positioning data from a positioning sensor (e.g., a GPS module, RTK module, or other positioning sensor) , where the sensors providing the sensor data are all incorporated into a single payload 124.
  • various sensors incorporated into the single payload 124 can be pre-calibrated based on extrinsic and intrinsic parameters of the sensors and synchronized based on a reference clock signal shared among the various sensors.
  • the reference clock signal may be generated by time circuitry associated with one of the various sensors or a separate time circuitry connecting the various sensors.
  • the positioning data from the positioning sensor may be updated based on correction data received from a positioning sensor of the movable object 104 which may be included in functional modules 108, sensing system 118, or a separate module coupled to movable object 104 which provides positioning data for the movable object.
  • the scanning data can be geo-referenced using the positioning data and used to construct the map of the target environment.
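  • For illustration, geo-referencing a single return can be sketched as rotating the point by the payload attitude and translating it by the payload position. The Vec3/Rot3 helpers and function name below are assumptions; a real pipeline would also interpolate pose over time and apply the sensor-to-IMU extrinsic calibration mentioned above.

```kotlin
// Simplified, illustrative geo-referencing: transform a point measured in the sensor frame
// into a world/global frame using the payload attitude (3x3 rotation) and payload position
// at the same timestamp. Pose interpolation and sensor extrinsics are intentionally omitted.

data class Vec3(val x: Double, val y: Double, val z: Double)

// Row-major 3x3 rotation matrix (body-to-world), e.g., derived from the INS attitude.
class Rot3(private val m: DoubleArray) {
    init { require(m.size == 9) }
    fun times(v: Vec3) = Vec3(
        m[0] * v.x + m[1] * v.y + m[2] * v.z,
        m[3] * v.x + m[4] * v.y + m[5] * v.z,
        m[6] * v.x + m[7] * v.y + m[8] * v.z,
    )
}

fun georeference(pointInSensorFrame: Vec3, attitude: Rot3, payloadPosition: Vec3): Vec3 {
    val p = attitude.times(pointInSensorFrame)   // rotate into the world frame
    return Vec3(                                 // translate by the payload position
        p.x + payloadPosition.x,
        p.y + payloadPosition.y,
        p.z + payloadPosition.z,
    )
}
```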
  • FIG. 2 illustrates an example 200 of a movable object architecture in a movable object environment, in accordance with various embodiments.
  • a movable object 104 can include a flight controller 114 that communicates with payload 124 via adapter apparatus 122. Additionally, the flight controller can communicate with various functional modules 108 onboard the movable object. As discussed further below, the adapter apparatus 122 can facilitate communication between the flight controller and the payload via a high bandwidth connection, such as Ethernet or universal serial bus (USB) . The adapter apparatus 122 can further provide power to the payload 124.
  • the payload may include a plurality of sensors, including a scanning sensor 202, a monocular camera 204, an RGB camera 206, an inertial navigation system 208 which may include an inertial measurement unit 210 and a positioning sensor 212, one or more processors 214, and one or more storage devices 216.
  • scanning sensor 202 may include a LiDAR sensor.
  • the LiDAR sensor may provide high resolution scanning data of a target environment.
  • Various LiDAR sensors may be incorporated into the payload, having various characteristics.
  • the LiDAR sensor may have a field of view of approximately 70 degrees and may implement various scanning patterns, such as a seesaw pattern, an elliptical pattern, a petal pattern, etc.
  • a lower density LiDAR sensor can be used in the payload, as higher density point clouds require additional processing time.
  • the payload may implement its components (e.g., one or more processors, one or more cameras, the scanning sensor, INS, etc. ) on a single embedded board.
  • the payload may further provide thermal management for the components.
  • the payload may further include a greyscale monocular camera 204.
  • the monocular camera 204 may include a mechanical shutter that is synchronized with the inertial navigation system (INS) 208 such that when an image is captured by the monocular camera, the attitude of the payload at that moment is associated with the image data.
  • This enables visual features (walls, corners, points etc. ) to be extracted from image data captured by the monocular camera 204.
  • the visual features that are extracted can be associated with a pose-timestamp signature that is generated from the attitude data produced by the INS.
  • Using the pose-timestamped feature data, visual features can be tracked from one frame to another, which enables a trajectory of the payload (and, as a result, the movable object) to be generated.
  • the payload may further include an RGB camera 206.
  • the RGB camera can collect live image data that is streamed to the mobile device 110 while the movable object is in flight. For example, the user can select whether to view image data collected by one or more cameras of the movable object or the RGB camera of the payload through a user interface of the mobile device 110. Additionally, color data can be obtained from image data collected by the RGB camera and overlaid on the point cloud data collected by the scanning sensor. This provides improved visualizations of the point cloud data that more closely resemble the actual objects in the target environment being scanned.
  • the payload can further include an inertial navigation system 208.
  • the INS 208 can include an inertial measurement unit 210 and optionally a positioning sensor 212.
  • the IMU 210 provides the attitude of the payload which can be associated with the scanning data and image data captured by the scanning sensor and cameras, respectively.
  • the positioning sensor 212 may use a global navigation satellite service (GNSS), such as GPS, GLONASS, Galileo, BeiDou, etc.
  • the positioning data collected by the positioning sensor 212 may be enhanced using an RTK module 218 onboard the movable object to enhance positioning data collected by INS 208.
  • RTK information can be received wirelessly from one or more base stations.
  • the antenna of the RTK module 218 and the payload are separated by a fixed distance on the movable object, allowing the RTK data collected by the RTK module 218 to be transformed into the IMU frame of the payload.
  • the payload 124 may not include its own positioning sensor 212 and instead may rely on a positioning sensor and RTK module 218 of the movable object, e.g., included in functional modules 108.
  • positioning data may be obtained from the RTK module 218 of the movable object 104 and may be combined with the IMU data.
  • the positioning data obtained from the RTK module 218 can be transformed based on the known distance between the RTK antenna and the payload.
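  • A hedged sketch of that lever-arm correction, reusing the Vec3/Rot3 helpers from the geo-referencing example above (the function name and offset convention are assumptions):

```kotlin
// Illustrative lever-arm correction: the RTK antenna and the payload IMU are separated by a
// fixed, known offset on the airframe, so an RTK fix can be shifted into the IMU/payload frame
// by rotating that body-frame offset with the current attitude. Names are hypothetical.

fun rtkToImuPosition(
    rtkPosition: Vec3,        // position reported at the RTK antenna (world frame)
    attitude: Rot3,           // body-to-world rotation from the INS
    antennaToImuOffset: Vec3  // fixed antenna-to-IMU offset, expressed in the body frame
): Vec3 {
    val offsetWorld = attitude.times(antennaToImuOffset)
    return Vec3(
        rtkPosition.x + offsetWorld.x,
        rtkPosition.y + offsetWorld.y,
        rtkPosition.z + offsetWorld.z,
    )
}
```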
  • the payload can include one or more processors 214.
  • the one or more processors may include an embedded processor that includes a CPU and DSP as an accelerator. In some embodiments, other processors may be used such as GPUs, FPGAs, etc.
  • the processor 214 can geo-reference the scanning data using the INS data. In some embodiments, the geo-referenced scanning data is downsampled to a lower resolution before being sent to the mobile device 110 for visualization.
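  • The disclosure does not specify the downsampling method; one common, generic choice is voxel-grid downsampling, sketched below purely as an assumption (reusing the Vec3 type from the earlier sketch).

```kotlin
// Hypothetical voxel-grid downsampling: keep one representative point per cubic voxel of
// side `voxelSize`. This is a generic technique, not necessarily the method used on the payload.

import kotlin.math.floor

fun voxelDownsample(points: List<Vec3>, voxelSize: Double): List<Vec3> {
    val kept = LinkedHashMap<Triple<Long, Long, Long>, Vec3>()
    for (p in points) {
        val key = Triple(
            floor(p.x / voxelSize).toLong(),
            floor(p.y / voxelSize).toLong(),
            floor(p.z / voxelSize).toLong(),
        )
        kept.putIfAbsent(key, p)   // the first point seen in each voxel is kept
    }
    return kept.values.toList()
}
```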
  • payload communication manager 230 can manage downsampling and other data settings for a plurality of mobile devices that connect to the payload.
  • different mobile devices may be associated with preference data maintained by the payload communication manager 230, where the preference data indicates how the mapping data is to be prepared and/or sent to that mobile device, for example, communication protocol settings, channel settings, encryption settings, transfer rate, downsampling settings, etc.
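  • A sketch of such per-device preference data is shown below; the field names and schema are invented for illustration and are not defined by the disclosure.

```kotlin
// Illustrative per-mobile-device preference record kept by the payload communication manager.
// The disclosure lists the kinds of settings (protocol, channel, encryption, transfer rate,
// downsampling) without fixing a schema; every field name here is hypothetical.

data class MobileDevicePreferences(
    val deviceId: String,
    val protocol: String,             // which payload communication protocol to use
    val channel: Int,
    val encryptionEnabled: Boolean,
    val maxTransferRateKbps: Int,
    val downsampleVoxelSizeMeters: Double
)

class PayloadCommunicationPrefs {
    private val prefs = mutableMapOf<String, MobileDevicePreferences>()

    fun register(p: MobileDevicePreferences) { prefs[p.deviceId] = p }

    // Look up how mapping data should be prepared for a given connected device.
    fun preferencesFor(deviceId: String): MobileDevicePreferences? = prefs[deviceId]
}
```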
  • the processor (s) 214 can also manage storage of the sensor data to one or more storage devices 216.
  • the storage device (s) can include a secure digital (SD) card or other removable media, a solid-state drive (SSD) , an eMMC, and/or a memory.
  • the processor can also be used to perform visual inertial odometry (VIO) using the image data collected by the monocular camera 204. This may be performed in real-time to calculate the visual features which are then stored in a storable format (not necessarily as images) .
  • log data may be stored to an eMMC and debugging data can be stored to an SSD.
  • the processor (s) can include an encoder/decoder built in for processing image data captured by the RGB camera.
  • Flight controller 114 can send and receive data to and from the remote control via communication system 120B.
  • Flight controller 114 can connect to various functional modules 108, such as RTK module 218, IMU 220, barometer 222, or magnetometer 224.
  • communication system 120B can connect to other computing devices instead of, or in addition to, flight controller 114.
  • sensor data collected by the one or more functional modules 108 can be passed from the flight controller 114 to the payload 124.
  • the user can receive data from, and provide commands to, the UAV using a real-time visualization application 128 on mobile device 110.
  • the mobile device 110 can include a mobile communication manager 226 which facilitates communication with the movable object 104 and/or payload 124.
  • the mobile communication manager 226 can act as an abstraction layer between the payload 124 and the mobile device 110.
  • the mobile communication manager 226 can receive sensor data produced by the payload in a first format and provide it to the real-time visualization application 128 in a second format.
  • the mobile communication manager 226 can manage the specific interfaces provided by the payload and/or sensors of the payload (e.g., the particular methods required to request sensor data, subscribe to sensor data streams, etc. ) . This way, the real-time visualization application 128 does not have to be separately configured to receive data from every possible sensor of the payload. Additionally, if the sensors of the payload are changed, the real-time visualization application 128 does not have to be reconfigured to consume data from the new sensors.
  • the mobile device 110 can include a user interface manager 228. The user interface manager 228 can receive data from the mobile communication manager 226 and initiate workflows as needed to prepare the data for the real-time visualization application.
  • the user interface manager 228 can obtain commands from the real-time visualization application 128 (for example, to change the view of the visualization, move the payload and/or movable object, etc. ) and generate corresponding commands to be processed by the movable object, adapter apparatus, and/or payload.
  • FIG. 3 illustrates an example of mobile device and payload architectures, in accordance with various embodiments.
  • a payload 124 may be configured to communicate with a plurality of different mobile devices.
  • a payload communication manager 230 can support one or more payload communication protocols 300 to be used to communicate mapping data, command data, and/or telemetry data with one or more mobile devices. For example, point cloud data may be communicated using a first protocol while telemetry data or other command data may be communicated with a second protocol. Alternatively, a single protocol may be used to communicate all data exchanged between the payload 124 and the mobile device 110. Additionally, payload communication manager 230 can maintain mobile device preferences 302.
  • Payload communication manager 230 can use the mobile device preferences 302 to determine how and whether to prepare data for sending to a particular mobile device. For example, payload communication manager 230 can use the mobile device preferences 302 to manage communication protocol settings, channel settings, encryption settings, transfer rate, downsampling settings, etc. to facilitate communication with the particular mobile device 110 with which it is communicating.
  • the mobile communication manager 226 can receive sensor data produced by the payload and make the sensor data available to a user interface manager 228.
  • the user interface manager 228 can invoke visualization manager 304.
  • Visualization manager 304 can process the mapping data and generate a real-time rendering of the mapping data.
  • the user interface manager 228 can obtain the real-time rendering of the mapping data and provide it to the real-time visualization application 128 to be rendered on a display of the mobile device 110.
  • Mapping data may be received throughout a mapping mission. As additional mapping data is received, the user interface manager 228 can pass the new mapping data to the visualization manager 304 which can process the new mapping data and update the real-time rendering of the mapping data.
  • the visualization manager 304 may use dynamic buffers (e.g., circular buffers comprising a plurality of buffer blocks) to optimize memory management and reduce computation load for caching and passing mapping data (e.g., point cloud data) to be rendered on the real-time visualization application 128 through the user interface manager 228.
  • the mapping data may be stored in non-contiguous buffer blocks of the dynamic buffers when new mapping data is received through a mapping mission. In circumstances where the empty buffer blocks of the dynamic buffers are all filled, new mapping data received through the mapping mission may overwrite old mapping data in a size of the buffer block, such that uninterrupted and real-time visualization may be achieved with substantially low latency.
  • the user can interact with the real-time visualization through the real-time visualization application 128.
  • the user may use gestures (e.g., finger gestures, eye gestures indicating eye movements, head gestures indicating head movement, or other gestures indicating a movement of the user’s body parts) to interact with the real-time visualization (e.g., a translation, rotation, tilt, zoom, etc. ) .
  • the user may also provide commands to be executed by the movable object, payload, and/or adapter apparatus based on the real-time visualization. For example, the user may start or stop recording of the mapping information.
  • the user may determine that the target area has not been mapped completely and the user may direct the movable object to the unmapped area (e.g., by instructing the movable object to change its position in the target environment and/or to change the direction in which the payload is oriented) .
  • These commands can be received by the real-time visualization application 128 based on user input to the mobile device 110 and the commands may be passed to the user interface manager 228.
  • the user interface manager 228 can translate the commands into movable object, payload, and/or adapter apparatus commands.
  • a movement command may be converted from a user interface coordinate system into a movable object coordinate system, world coordinate system, or other coordinate system and used to generate commands that can be executed by a flight controller, the payload, the adapter apparatus, etc. to move the movable object and/or payload appropriately.
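  • As one hypothetical instance of such a conversion, a one-finger pan gesture in screen coordinates might be scaled and rotated by the vehicle heading into a world-frame displacement command; the names, scale factor, and conventions below are assumptions.

```kotlin
// Hypothetical gesture-to-command conversion: a pan delta in screen coordinates is scaled to
// meters and rotated by the current heading into a world-frame (north/east) displacement.
// The scale factor, names, and axis conventions are assumptions made for this sketch.

import kotlin.math.cos
import kotlin.math.sin

data class MoveCommand(val northMeters: Double, val eastMeters: Double)

fun panGestureToCommand(
    dxPixels: Double, dyPixels: Double,   // gesture delta in screen coordinates
    headingRad: Double,                   // current yaw of the movable object (0 = north)
    metersPerPixel: Double                // UI scale factor
): MoveCommand {
    val forward = -dyPixels * metersPerPixel   // dragging up means "move forward"
    val right = dxPixels * metersPerPixel
    // Rotate the body-frame motion into the world (north/east) frame.
    val north = forward * cos(headingRad) - right * sin(headingRad)
    val east = forward * sin(headingRad) + right * cos(headingRad)
    return MoveCommand(north, east)
}
```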
  • the commands may be to change the view of the point cloud visualization.
  • the user interface manager 228 can obtain the commands from the real-time visualization application 128 and instruct the visualization manager 304 to update the visualization based on the commands.
  • user interface manager 228 can make a variety of user interface widgets available for the real-time visualization application.
  • the real-time visualization application 128 can include a Lidar Settings widget for changing the settings of the LiDAR, such as sampling rate, scan mode, echo mode, etc., and a Point Cloud Record widget which triggers start, stop, pause, and resume of the recording of the point cloud on the payload.
  • the collected point cloud data is written to a file on the payload’s storage system (e.g., fixed media, such as a hard disk drive, removable media, such as an SD card, etc. ) .
  • the real-time visualization application 128 can include a Point cloud checklist widget which indicates a current status of the payload, such as whether data can be recorded correctly and whether all the systems are ready to start recording.
  • the real-time visualization application 128 can include a LiDAR map widget which includes the 3D point cloud visualization. This widget displays the point cloud and enables the visualization to be moved (e.g., reoriented, moved to a new location in the target environment, etc.) according to the user's gesture inputs, through movement of the movable object or the payload and/or movement of the payload's adapter apparatus.
  • the LiDAR map widget includes a communication layer to the payload and includes data de-serializers for the point cloud data received from the payload.
  • the real-time visualization application 128 can include a Perspective modes widget which enables the user to quickly change the perspective of the point cloud to view the front, side or top views of the model being generated in real time.
  • the real-time visualization application 128 can include a Throughput widget which provides information about the point cloud, such as the number of points, the size of the model, etc.
  • the real-time visualization application 128 can include a Lidar map scale widget which provides the scale of the point cloud as the user zooms in/out.
  • the real-time visualization application 128 can include a Point cloud playback widget which is used to playback a point cloud file that was previously recorded.
  • FIG. 4 illustrates an example of an adapter apparatus in a movable object environment, in accordance with various embodiments.
  • an adapter apparatus 122 enables a payload 124 to be connected to a movable object 104.
  • the adapter apparatus 122 may be a Payload Software Development Kit (SDK) adapter plate, an adapter ring, or the like.
  • the payload 124 can be connected to the adapter apparatus 122, and the adapter apparatus can be coupled with the fuselage of the movable object 104.
  • adapter apparatus may include a quick release connector to which the payload can be attached/detached.
  • the payload 124 When the payload 124 is connected to the movable object 104 through the adapter apparatus 122, the payload 124 can also be controlled by a mobile device 110 via a remote control 111. As shown in FIG. 4, the remote control 111 and/or real-time visualization application 128 can send a control instruction through a command channel between the remote control and the communication system of the movable object 104. The control instruction can be transmitted to control the movable object 104 and/or the payload 124. For example, the control instruction may be used for controlling the attitude of the payload, to selectively view live data being collected by the payload (e.g., real-time low density mapping data, image data, etc. ) on the mobile device, etc.
  • the control instruction is sent to the adapter apparatus 122
  • the communication protocol between the communication system of the movable object and the adapter apparatus may be referred to as an internal protocol
  • the communication protocol between the adapter apparatus and the payload 124 may be referred to as an external protocol.
  • an internal protocol between the communication system of the movable object 104 and the adapter apparatus 122 is recorded as a first communication protocol
  • an external protocol between the adapter apparatus 122 and the payload 124 is recorded as a second communication protocol.
  • a first communication protocol is adopted to send the control instruction to the adapter apparatus through a command channel between the communication system and the adapter apparatus.
  • the internal protocol between the communication system of the movable object and the adapter apparatus is converted into an external protocol between the adapter apparatus and the payload 124.
  • the adapter apparatus can convert the internal protocol into the external protocol by adding a header conforming to the external protocol to the outer layer of the internal protocol message, so that the internal protocol message is converted into an external protocol message.
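  • A minimal sketch of that wrapping step is shown below; the header layout (magic value, version, reserved byte, length field) is invented for illustration and is not the actual external protocol.

```kotlin
// Illustrative protocol conversion: the adapter wraps the internal-protocol message in an
// outer header conforming to the external protocol. The 8-byte header layout shown here is
// an assumption made for this sketch.

import java.nio.ByteBuffer
import java.nio.ByteOrder

fun wrapInternalMessage(internalMessage: ByteArray): ByteArray {
    val header = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN)
    header.putShort(0x55AA.toShort())        // hypothetical external-protocol magic value
    header.put(1.toByte())                   // hypothetical protocol version
    header.put(0.toByte())                   // reserved
    header.putInt(internalMessage.size)      // length of the wrapped internal message
    return header.array() + internalMessage  // external header + untouched internal message
}
```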
  • the communication interface between the adapter apparatus and the payload 124 may include a Controller Area Network (CAN) interface or a Universal Asynchronous Receiver/Transmitter (UART) interface.
  • the payload 124 can collect sensor data from a plurality of sensors incorporated into the payload, such as a LiDAR sensor, one or more cameras, an INS, etc.
  • the payload 124 can send sensor data to the adapter apparatus through a network port between the payload 124 and the adapter apparatus.
  • the payload 124 may also send sensor data through a CAN interface or a UART interface between the payload 124 and the adapter apparatus.
  • the payload 124 sends the sensor data to the adapter apparatus through the network port, the CAN interface or the UART interface using a second communication protocol, e.g., the external protocol.
  • the adapter apparatus After the adapter apparatus receives the sensor data from the payload 124, the adapter apparatus converts the external protocol between the adapter apparatus and the payload 124 into an internal protocol between the communication system of the movable object 104 and the adapter apparatus.
  • the adapter apparatus uses an internal protocol to send sensor data to a communication system of the movable object through a data channel between the adapter apparatus and the movable object. Further, the communication system sends the sensor data to the remote control 111 through the data channel between the movable object and the remote control 111, and the remote control 111 forwards the sensor data to the mobile device 110.
  • the sensor data can be encrypted to obtain encrypted data. Further, the adapter apparatus uses the internal protocol to send the encrypted data to the communication system of the movable object through the data channel between the adapter apparatus and the movable object, the communication system sends the encrypted data to the remote control 111 through the data channel between the movable object and the remote control 111, and the remote control 111 forwards the encrypted data to the mobile device 110.
  • the payload 124 can be mounted on the movable object through the adapter apparatus.
  • the adapter apparatus receives the control instruction for controlling the payload 124 sent by the movable object
  • the internal protocol between the movable object and the adapter apparatus is converted into an external protocol between the adapter apparatus and the payload 124, and the control instruction is sent to the payload 124 using the external protocol. In this way, a third-party device produced by a third-party manufacturer can communicate with the movable object normally through the external protocol, the movable object can support the third-party device, and the range of applications of the movable object is broadened.
  • the adapter apparatus sends a handshake instruction to the payload 124, and the handshake instruction is used for detecting whether the adapter apparatus and the payload 124 are in a normal communication connection.
  • the adapter apparatus can also send a handshake instruction to the payload 124 periodically or at arbitrary times. If the payload 124 does not answer, or the response message of the payload 124 is wrong, the adapter apparatus can disconnect the communication connection with the payload 124, or the adapter apparatus can limit the functions available to the payload.
  • the adapter apparatus may also comprise a power interface, and the power interface is used for supplying power to the payload 124.
  • the movable object can supply power to the adapter apparatus
  • the adapter apparatus can further supply power to the payload 124
  • the adapter apparatus may include a power interface through which the adapter apparatus supplies power to the payload 124.
  • the communication interface between the movable object and the adapter apparatus may include a Universal Serial Bus (USB) interface.
  • the data channel between the communication system of the movable object and the adapter apparatus can be implemented using a USB interface.
  • the adapter apparatus can convert the USB interface into a network port, such as an Ethernet port.
  • the payload 124 can carry out data transmission with the adapter apparatus through the network port, so that the payload 124 can conveniently communicate with the adapter apparatus over the network using the transmission control protocol, without requiring a USB driver.
  • the interfaces externally output by the movable object comprise a CAN port, a USB port, and a 12V 4A power supply port.
  • the CAN port, the USB port, and the 12V 4A power port are each connected to the adapter apparatus; the adapter apparatus performs protocol conversion on these interfaces, and a pair of external interfaces can be generated.
  • FIG. 5 illustrates an example 500 of generating a real-time visualization of mapping data by a mobile device, in accordance with various embodiments.
  • visualization manager 304 can render a real-time visualization of mapping data that does not require the mapping data to first be post-processed into a file.
  • the visualization manager 304 can be implemented using a graphics application programming interface (API) , such as OpenGL 2.0, or other graphics APIs, libraries, etc.
  • a visualization request is received (e.g., from user interface manager 228, as described above)
  • the request can be received by a map view manager 502 which coordinates the various modules of the visualization manager 304 to generate the real-time visualization.
  • mapping data when mapping data is received, the map view manager 502 can instruct storage manager 504 to store the mapping data to memory 506.
  • the mapping data can require a significant amount of memory for storage which may not be available as a contiguous block of memory.
  • dynamic indexing system 508 can allocate one or more dynamic circular (or ring) buffers 510 that comprise a plurality of non-contiguous blocks of memory.
  • the dynamic indexing system 508 maintains an index of cursors to the blocks where the data is stored.
  • the storage manager 504 can store some data into an overview buffer 512. For example, points of the mapping data having an intensity value greater than a threshold may be stored to the overview buffer 512.
  • point data having an intensity value less than or equal to the threshold value may be randomly sampled and added to the overview buffer 512, such that the points with greater intensity values that are selected and stored first may be smoothed when rendering on a display of the mobile device.
  • the overview buffer 512 can be of a fixed memory size. During rendering, the overview buffer 512 can be rendered first.
  • the new data can overwrite at least a portion of the old data in the overview buffer. For example, the oldest points may be overwritten by the newest points. Alternatively, a random sampling of older points may be overwritten by the newer points.
  • If the new data is associated with an intensity value less than or equal to the intensity threshold, then this data is written to the dynamic buffers. If the dynamic buffers are full, then old points in the dynamic buffers are overwritten. As discussed with respect to the overview buffers, the data that is overwritten may correspond to a random sampling of data stored in the dynamic buffers or may correspond to the oldest data stored in the dynamic buffers.
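  • The routing described above might be sketched as follows; the sampling probability, interface names, and the choice to store sampled low-intensity points in both buffers are assumptions made for the example.

```kotlin
// Illustrative routing of incoming points between the fixed-size overview buffer and the
// dynamic buffers, following the intensity-threshold rule described above.

import kotlin.random.Random

interface OverviewBuffer { fun add(point: FloatArray) }   // fixed size; overwrites when full
interface DynamicBuffers { fun add(point: FloatArray) }   // non-contiguous blocks; overwrites when full

class PointRouter(
    private val overview: OverviewBuffer,
    private val dynamic: DynamicBuffers,
    private val intensityThreshold: Int,
    private val lowIntensitySampleRate: Double = 0.1
) {
    fun accept(point: FloatArray, intensity: Int) {
        if (intensity > intensityThreshold) {
            overview.add(point)                       // high-intensity points go to the overview buffer
        } else {
            if (Random.nextDouble() < lowIntensitySampleRate) {
                overview.add(point)                   // random sample smooths the overview rendering
            }
            dynamic.add(point)                        // low-intensity points go to the dynamic buffers
        }
    }
}
```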
  • the remaining points of the mapping data can be stored to the dynamic buffers 510.
  • the mapping data can include point data (e.g., x, y, z coordinates) and color data (e.g., RGB values) corresponding to the points.
  • the color data may correspond to colors extracted from image data captured by a visible light camera of the payload, intensity data, height data, or other data being represented by color values.
  • the points can be stored in point buffers 514 and the color data can be stored in color buffers 516.
  • the storage manager 504 can additionally store the point data in a point cloud data file 518 on disk storage 520 of mobile device 110.
  • the point cloud data file 518 can also be used for playback of the visualization at a different point in time (e.g., rather than in real-time) .
  • Map view manager 502 can additionally manage construction of the real-time visualization by invoking point cloud view manager 522.
  • Point cloud view manager 522 can invoke current view manager 524 which causes the visualization to be displayed based on view settings and current camera position.
  • the current view manager 524 can retrieve the point data from memory (e.g., the point data stored in overview buffer 512 and the dynamic buffers 510, including the point data from point buffer 514 and corresponding color data from color buffer 516) and issue draw calls using the retrieved point data.
  • the current view manager 524 can access the memory 506 via point cloud painter 526.
  • point cloud painter 526 can access memory 506 (either directly or via storage manager 504, as shown in FIG. 5) and retrieve the appropriate portions of memory including the point data for the current view. In some embodiments, this may include one or more buffer blocks corresponding to the most recently stored point data.
  • Camera Controller 530 can calculate model, view and projection (MVP) matrices to enable movement of the point cloud based on input received from the user.
  • the user input may include gesture controls.
  • the gesture controls can be identified by UI controller 528.
  • UI controller 528 can include listeners for gesture events related to navigating the point cloud visualization, such as translation with one finger (up/down/left/right) ; rotation about origin with two fingers (up/down/left/right) ; screen tilt with two fingers rotating about the center of the screen; zoom in/out with two-finger pinch; reset view when double-tapping the screen, etc.
  • the MVP matrices can be used to update the visualization.
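  • A sketch of computing the MVP matrices from gesture-driven camera state on Android is shown below; it uses the android.opengl.Matrix utility class, and the camera-state fields and gesture mapping are illustrative assumptions.

```kotlin
// Illustrative MVP computation driven by gesture-controlled camera state. Assumes an
// Android environment (android.opengl.Matrix); field names and defaults are hypothetical.

import android.opengl.Matrix

class PointCloudCamera {
    private val model = FloatArray(16)
    private val view = FloatArray(16)
    private val projection = FloatArray(16)
    private val viewModel = FloatArray(16)
    private val mvp = FloatArray(16)

    var distance = 50f     // two-finger pinch (zoom) adjusts this
    var yawDeg = 0f        // two-finger drag rotates about the origin
    var panX = 0f          // one-finger drag translates the model
    var panY = 0f

    fun computeMvp(aspectRatio: Float): FloatArray {
        Matrix.setIdentityM(model, 0)
        Matrix.translateM(model, 0, panX, panY, 0f)
        Matrix.rotateM(model, 0, yawDeg, 0f, 0f, 1f)            // rotate about the z (up) axis
        Matrix.setLookAtM(view, 0,
            0f, -distance, distance,                            // eye position
            0f, 0f, 0f,                                         // look-at center
            0f, 0f, 1f)                                         // up vector (z-up point cloud)
        Matrix.perspectiveM(projection, 0, 45f, aspectRatio, 0.1f, 2000f)
        Matrix.multiplyMM(viewModel, 0, view, 0, model, 0)      // view * model
        Matrix.multiplyMM(mvp, 0, projection, 0, viewModel, 0)  // projection * view * model
        return mvp
    }
}
```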
  • the visualization can be generated very quickly or substantially in real-time.
  • the point data can be provided in batches to visualization manager 304.
  • the visualization can be generated and/or updated to reflect the newly received data in a scale of milliseconds, such as within 100 milliseconds, from the time at which the mapping data is acquired by the scanning sensor of the payload, transmitted by the payload, received by the mobile device, or stored in the memory of the mobile device.
  • the time needed to update the visualization from the time at which the new data has been acquired by the scanning sensor of the payload, transmitted by the payload, received by the mobile device, or stored in memory on the mobile device can vary between 10 milliseconds and 1 second.
  • FIGS. 6A and 6B illustrate data processing based on a circular buffer in a data processing system, in accordance with various embodiments.
  • dynamic buffer (s) 510 in FIG. 5 can be implemented as circular buffers 610 with dynamic size.
  • overview buffer (s) 512 in FIG. 5 can be implemented as circular buffers 610 with fixed size.
  • as shown in FIG. 6A, an upstream module (e.g., a data processor A 601) can write data into the circular buffer 610, while a downstream module (e.g., a data processor B 602) can read data out from the circular buffer 610.
  • the upstream data processor A 601 and the downstream data processor B 602 may be the same processor or different processors on the mobile device.
  • the circular buffer 610 which may comprise a plurality of buffer blocks is advantageous to buffering data streams, e.g. data blocks of point cloud data, due to its circular topological data structure.
  • the plurality of buffer blocks may be noncontiguous memory blocks which are indexed using cursors by a dynamic indexing system such that they are treated as a single buffer for optimizing memory management and reducing processing time to write/read mapping data in the memory.
  • a circular buffer management mechanism can be used for maintaining the circular buffer 610.
  • the data processor A 601 can write 621 a point cloud data in blocks into a buffer block 611, which may be referred to as a write block (WR) .
  • the data processor B 602 can read 622 a data block out from a buffer block 612, which may be referred to as a read block (RD) .
  • the circular buffer 610 may comprise one or more ready blocks (RYs) stored in one or more buffer blocks.
  • a ready block 613 is written by an upstream module, e.g. the data processor A 601, in a buffer block and has not yet been processed by the downstream module, e.g. the data processor B 602. There can be multiple ready blocks in the circular buffer 610, when the data processor B 602 is lagging behind the data processor A 601 in processing data in the circular buffer 610.
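  • The write/read/ready-block bookkeeping described above can be sketched as follows (a simplified, assumed model in Kotlin; the actual buffer management may differ):
```kotlin
// Sketch: the upstream processor advances a write cursor, the downstream processor advances
// a read cursor, and blocks written but not yet read are the "ready" blocks.
class CircularBlockBuffer(private val capacity: Int) {
    private val blocks = arrayOfNulls<FloatArray>(capacity)
    private var writeIndex = 0   // next block to be written (WR)
    private var readIndex = 0    // next block to be read (RD)
    private var readyCount = 0   // number of ready blocks (RY)

    @Synchronized
    fun write(block: FloatArray): Boolean {
        if (readyCount == capacity) return false       // full: caller decides to overwrite or drop
        blocks[writeIndex] = block
        writeIndex = (writeIndex + 1) % capacity
        readyCount++
        return true
    }

    @Synchronized
    fun read(): FloatArray? {
        if (readyCount == 0) return null               // nothing ready: the reader has caught up
        val block = blocks[readIndex]
        readIndex = (readIndex + 1) % capacity
        readyCount--
        return block
    }
}
```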
  • FIG. 6B illustrates data processing with low latency based on a circular buffer in a data processing system, in accordance with various embodiments.
  • the buffer block 614 in the circular buffer 610 includes a data block, which acts as both the write block for the data processor A 601 and the read block for the data processor B 602. Both the data processor A 601 and the data processor B 602 may be accessing the same buffer block 614 simultaneously.
  • the data processor A 601 may be writing 623 data of a data block in the buffer block 614 while the data processor B 602 is reading 624 data out from the buffer block 614.
  • the data processor A 601 can provide the fine granular control information 620 to the data processor B 602, so that the data processor B 602 can keep up with the progress of the data processor A 601. As a result, there may be no ready block in the circular buffer 610 (i.e. the number of ready blocks in the circular buffer 610 is zero) .
  • storage manager 504 can write point data to the dynamic buffers sequentially to empty blocks.
  • the dynamic indexing system 508 is updated to track where in memory the data has been stored. If the buffer blocks of the circular buffer are filled, then old data is overwritten by new data. In some embodiments, the oldest points are overwritten by the newest points. However, in some embodiments, old points are randomly selected to be overwritten by new points.
  • the dynamic indexing system can be used to identify blocks associated with the oldest point data, and then a portion of those blocks is selected randomly and overwritten with new data. This preserves some of the oldest points without dropping new points.
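  • One possible realization of this overwrite policy is sketched below in Kotlin; the window size over the oldest blocks and the class names are illustrative assumptions.
```kotlin
import kotlin.random.Random

// Hypothetical sketch: when every block is full, the dynamic index identifies the oldest
// blocks and a victim is chosen at random among them, so a sample of the oldest points is
// preserved while newly arriving points are never dropped.
class OverwritingPointStore(private val capacity: Int, private val oldestWindow: Int = 8) {
    private val blocks = arrayOfNulls<FloatArray>(capacity)
    private val age = ArrayDeque<Int>()     // slot indices ordered oldest-first (dynamic index)

    fun store(block: FloatArray) {
        val slot = if (age.size < capacity) {
            age.size                                     // sequential write into an empty slot
        } else {
            val window = minOf(oldestWindow, age.size)
            age.removeAt(Random.nextInt(window))         // random victim among the oldest blocks
        }
        blocks[slot] = block
        age.addLast(slot)                                // newest data lives at the tail
    }

    fun snapshot(): List<FloatArray> = age.mapNotNull { blocks[it] }
}

fun main() {
    val store = OverwritingPointStore(capacity = 4, oldestWindow = 2)
    repeat(10) { i -> store.store(floatArrayOf(i.toFloat())) }
    println(store.snapshot().map { it[0] })   // a mix of a few old blocks and the newest blocks
}
```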
  • FIG. 7 illustrates an example of a split screen user interface 700, in accordance with various embodiments.
  • the real-time visualization application 128 can include a camera view 702 and a point cloud view 704.
  • the camera view can be captured by a visible light camera of the payload and the point cloud view can be a visualization generated based on point cloud data collected by a LiDAR sensor of the payload.
  • These views can be synchronized such that the views are of approximately the same field of view at the same points in time.
  • this split screen view does not allow for independent control of the point cloud representation (e.g., via gesture-based inputs) .
  • the user can select a different view via icons 706 and 708, which allow for a selection of a view only of the visible light camera or only the point cloud data, respectively.
  • the user interface can include a recording widget 712 which can be selected to enable/disable recording.
  • the user interface can include a photo-video switching widget which can be selected to switch the imaging device between photo shooting and video recording.
  • an FPV view 714 can show a first person view from the perspective of the movable object via a visible light camera integrated into the movable object.
  • the user interface, in addition to the recording widget or the photo-video switching widget, can enable playback of previously stored visualizations, enable start/stop of storing the mapping data, and change settings of the LiDAR, such as sampling rate, scan mode, echo mode, etc.
  • FIG. 8 illustrates an example of user interface 800, in accordance with various embodiments.
  • the user can select the point cloud view icon 708 which causes the user interface to display only the point cloud view visualization.
  • the user can interact with the visualization using gestures.
  • the user can select a point 802 (e.g., by tapping on a touchscreen interface) and drag their finger to point 804 to cause a translation of the point cloud visualization.
  • Other gestures may include rotation about an origin with two fingers (up/down/left/right) ; screen tilt with two fingers rotating about the center of the screen; zoom in/out with two-finger pinch; reset view when double-tapping the screen, etc.
  • the point cloud visualization has been overlaid with color information representing the LiDAR intensity associated with the points (e.g., via selection of icon 806) .
  • the user may also select different overlays, such as height 808 or visible light 810.
  • FIG. 9 illustrates an example 900 of overlaying data values in mapping data, in accordance with various embodiments.
  • overlay information 902 can be obtained from the RGB camera or other sensors incorporated into the payload.
  • the overlay data can include color data which may include pixel values of various color schemes (e.g., 16-bit, 32-bit, etc. ) .
  • the color data can be extracted from one or more images captured by the RGB camera at the same time as the point cloud data was captured by the scanning sensor and these color values can be overlaid on the visualization of the point cloud data.
  • the color data may include various color values depending on the color values of the image data captured by the RGB camera.
  • the overlay data can include height above a reference plane.
  • a color value may be assigned to points depending on their height above the reference plane.
  • the height values may be relative height values, relative to the reference plane, or absolute height values (e.g., relative to sea level) .
  • the reference plane may correspond to the ground, floor, or an arbitrary plane selected by the user. The values may vary monochromatically as the height changes or may change colors as the height changes.
  • the overlay data can represent intensity values.
  • the intensity values may correspond to a return strength of the laser beam as received by the LiDAR sensor.
  • the intensity values may indicate material composition or characteristics of objects in the target environment. For example, based on the reflectivity of the material, characteristics of the material can be inferred (e.g., type of material, such as metal, wood, concrete, etc. ) and the overlay information may indicate these characteristics through different color values assigned to the different characteristics.
  • the point cloud data can be overlaid on a map of the target area being scanned. For example, the point cloud data can be overlaid on a two dimensional or three dimensional map provided by a mapping service.
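  • The following Kotlin sketch illustrates one way such overlays could be computed per point; the color ramp and the assumed height range are examples only, not the disclosed color schemes.
```kotlin
// Illustrative only: mapping either LiDAR intensity or height above a reference plane to an
// RGB color for the point cloud overlay, or passing through the visible-light pixel color.
enum class OverlayMode { VISIBLE_LIGHT, INTENSITY, HEIGHT }

data class Rgb(val r: Int, val g: Int, val b: Int)

// Linear ramp from blue (low) to red (high) over a normalized value in [0, 1].
fun ramp(normalized: Float): Rgb {
    val t = normalized.coerceIn(0f, 1f)
    return Rgb((255 * t).toInt(), 0, (255 * (1 - t)).toInt())
}

fun colorFor(mode: OverlayMode, intensity: Float, heightAboveRef: Float, cameraColor: Rgb): Rgb =
    when (mode) {
        OverlayMode.VISIBLE_LIGHT -> cameraColor                  // pixel value from the RGB camera
        OverlayMode.INTENSITY -> ramp(intensity / 255f)           // return strength of the laser beam
        OverlayMode.HEIGHT -> ramp((heightAboveRef + 10f) / 50f)  // assumed -10 m .. 40 m range
    }
```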
  • FIG. 10 illustrates an example of supporting a movable object interface in a software development environment, in accordance with various embodiments.
  • a movable object interface 1003 can be used for providing access to a movable object 1001 in a software development environment 1000, such as a software development kit (SDK) environment.
  • the SDK can be an onboard SDK implemented on an onboard environment that is coupled to the movable object 1001.
  • the SDK can also be a mobile SDK implemented on an off-board environment that is coupled to a mobile device or other computing device.
  • the movable object 1001 can include various functional modules A-C 1011-1013, and the movable object interface 1003 can include different interfacing components A-C 1031-1033.
  • Each said interfacing component A-C 1031-1033 in the movable object interface 1003 corresponds to a module A-C 1011-1013 in the movable object 1001.
  • the interfacing components may be rendered on a user interface of a display of a mobile device or other computing device in communication with the movable object.
  • the interfacing components, as rendered may include selectable command buttons for receiving user input/instructions to control corresponding functional modules of the movable object.
  • the movable object interface 1003 can provide one or more callback functions for supporting a distributed computing model between the application and movable object 1001.
  • the callback functions can be used by an application for confirming whether the movable object 1001 has received the commands. Also, the callback functions can be used by an application for receiving the execution results. Thus, the application and the movable object 1001 can interact even though they are separated in space and in logic.
  • the interfacing components A-C 1031-1033 can be associated with the listeners A-C 1041-1043.
  • a listener A-C 1041-1043 can inform an interfacing component A-C 1031-1033 to use a corresponding callback function to receive information from the related module (s) .
  • a data manager 1002, which prepares data 1020 for the movable object interface 1003, can decouple and package the related functionalities of the movable object 1001.
  • the data manager 1002 may be onboard, that is, coupled to or located on the movable object 1001, and prepares the data 1020 to be communicated to the movable object interface 1003 via communication between the movable object 1001 and a mobile device or other computing device.
  • the data manager 1002 may be off-board, that is, coupled to or located on a mobile device, and prepares data 1020 for the movable object interface 1003 via communication within the mobile device.
  • the data manager 1002 can be used for managing the data exchange between the applications and the movable object 1001. Thus, the application developer does not need to be involved in the complex data exchanging process.
  • the onboard or mobile SDK can provide a series of callback functions for communicating instant messages and for receiving the execution results from a movable object.
  • the onboard or mobile SDK can configure the life cycle for the callback functions in order to make sure that the information interchange is stable and complete.
  • the onboard or mobile SDK can establish a connection between a movable object and an application on a smart phone (e.g. using an Android system or an iOS system) .
  • the callback functions, such as the ones receiving information from the movable object, can take advantage of the patterns in the smart phone system and update the status according to the different stages in the life cycle of the smart phone system.
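  • A hedged sketch of the callback/listener pattern described above is shown below in Kotlin; the interface and method names are hypothetical and do not represent the actual SDK API.
```kotlin
// An interfacing component sends a command and registers a callback; a listener delivers the
// acknowledgement and execution result coming back from the corresponding module.
interface CommandCallback {
    fun onAcknowledged(commandId: Int)                    // movable object received the command
    fun onResult(commandId: Int, success: Boolean, detail: String)
}

class InterfacingComponent(private val send: (Int, String) -> Unit) {
    private val callbacks = mutableMapOf<Int, CommandCallback>()
    private var nextId = 0

    fun sendCommand(payload: String, callback: CommandCallback): Int {
        val id = nextId++
        callbacks[id] = callback
        send(id, payload)                                  // forwarded over the communication link
        return id
    }

    // Invoked by the listener when messages come back from the related module.
    fun onMessageFromModule(commandId: Int, ack: Boolean, success: Boolean, detail: String) {
        val cb = callbacks[commandId] ?: return
        if (ack) {
            cb.onAcknowledged(commandId)
        } else {
            cb.onResult(commandId, success, detail)
            callbacks.remove(commandId)                    // end of this callback's life cycle
        }
    }
}
```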
  • FIG. 11 illustrates an example of a movable object interface, in accordance with various embodiments.
  • a movable object interface 1103 can be rendered on a display of a mobile device or other computing devices representing statuses of different components of a movable object 1101.
  • the applications (e.g., APPs 1104-1106) may include an inspection app 1104, a viewing app 1105, and a calibration app 1106.
  • the movable object 1101 can include various modules, such as a camera 1111, a battery 1112, a gimbal 1113, and a flight controller 1114.
  • the movable object interface 1103 can include a camera component 1121, a battery component 1122, a gimbal component 1123, and a flight controller component 1124 to be rendered on a mobile device or other computing devices to receive user input/instructions by way of the APPs 1104-1106.
  • the movable object interface 1103 can include a ground station component 1126, which is associated with the flight controller component 1124.
  • the ground station component operates to perform one or more flight control operations, which may require a high-level privilege.
  • FIG. 12 illustrates an example of components for a movable object in a software development kit (SDK) , in accordance with various embodiments.
  • the drone class 1201 in the SDK 1200 is an aggregation of other components 1202-1207 for a movable object (e.g., a drone) .
  • the drone class 1201, which has access to the other components 1202-1207, can exchange information with the other components 1202-1207 and control the other components 1202-1207.
  • an application may have access to only one instance of the drone class 1201.
  • multiple instances of the drone class 1201 can be present in an application.
  • an application can connect to the instance of the drone class 1201 in order to upload the controlling commands to the movable object.
  • the SDK may include a function for establishing the connection to the movable object.
  • the SDK can terminate the connection to the movable object using an end connection function.
  • the developer can have access to the other classes (e.g. the camera class 1202, the battery class 1203, the gimbal class 1204, and the flight controller class 1205) .
  • the drone class 1201 can be used for invoking the specific functions, e.g. providing access data which can be used by the flight controller to control the behavior, and/or limit the movement, of the movable object.
  • an application can use a battery class 1203 for controlling the power source of a movable object. Also, the application can use the battery class 1203 for planning and testing the schedule for various flight tasks. As the battery is one of the most restricted elements in a movable object, the application may need to carefully consider the status of the battery, not only for the safety of the movable object but also for making sure that the movable object can finish the designated tasks.
  • the battery class 1203 can be configured such that if the battery level is low, the movable object can terminate the tasks and go home outright. For example, if the movable object is determined to have a battery level that is below a threshold level, the battery class may cause the movable object to enter a power savings mode.
  • the battery class may shut off, or reduce, power available to various components that are not integral to safely returning the movable object to its home. For example, cameras that are not used for navigation and other accessories may lose power, to increase the amount of power available to the flight controller, motors, navigation system, and any other systems needed to return the movable object home, make a safe landing, etc.
  • the application can obtain the current status and information of the battery by invoking a function in the Drone Battery Class to request such information.
  • the SDK can include a function for controlling the frequency of such feedback.
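  • For illustration, the threshold-triggered power-savings behavior described above might be modeled as in the following Kotlin sketch; the threshold value, class name, and callbacks are assumptions, not the battery class's actual interface.
```kotlin
// Assumed illustration: when the reported battery level drops below a threshold, non-essential
// components are powered down and a return-home action is requested.
class BatteryMonitor(
    private val lowBatteryThreshold: Int = 20,            // percent (hypothetical value)
    private val onEnterPowerSavings: () -> Unit,          // e.g. power down accessory cameras
    private val onReturnHome: () -> Unit
) {
    private var inPowerSavings = false

    // Called at the configured feedback frequency with the current battery level.
    fun onBatteryLevel(percent: Int) {
        if (percent < lowBatteryThreshold && !inPowerSavings) {
            inPowerSavings = true
            onEnterPowerSavings()
            onReturnHome()
        }
    }
}

fun main() {
    val monitor = BatteryMonitor(
        onEnterPowerSavings = { println("power savings: shutting off non-essential payload power") },
        onReturnHome = { println("requesting return-to-home") }
    )
    listOf(55, 34, 19).forEach { monitor.onBatteryLevel(it) }
}
```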
  • an application can use a camera class 1202 for defining various operations on the camera in a movable object, such as an unmanned aircraft.
  • the Camera Class includes functions for receiving media data stored on the SD card, getting and setting photo parameters, taking photos, and recording videos.
  • An application can use the camera class 1202 for modifying the settings of photos and recordings.
  • the SDK may include a function that enables the developer to adjust the size of photos taken.
  • an application can use a media class for maintaining the photos and records.
  • an application can use a gimbal class 1204 for controlling the view of the movable object.
  • the Gimbal Class can be used for configuring an actual view, e.g. setting a first person view of the movable object.
  • the Gimbal Class can be used for automatically stabilizing the gimbal, in order to be focused on one direction.
  • the application can use the Gimbal Class to change the angle of view for detecting different objects.
  • an application can use a flight controller class 1205 for providing various flight control information and status about the movable object.
  • the flight controller class can include functions for receiving and/or requesting access data to be used to control the movement of the movable object across various regions in a movable object environment.
  • an application can monitor the flight status, e.g. using instant messages.
  • the callback function in the Flight Controller Class can send back the instant message every one thousand milliseconds (1000 ms) .
  • the Flight Controller Class allows a user of the application to investigate the instant message received from the movable object. For example, the pilots can analyze the data for each flight in order to further improve their flying skills.
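  • The periodic instant-message feedback could be modeled as below, a Kotlin sketch using a scheduled executor; the data fields and names are assumed for illustration and are not the Flight Controller Class's actual API.
```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Sketch under assumptions: a periodic "instant message" with flight status is delivered to a
// registered callback every 1000 ms, mirroring the feedback interval mentioned above.
data class FlightStatus(val latitude: Double, val longitude: Double, val altitude: Double)

class FlightStatusFeed(private val poll: () -> FlightStatus) {
    private val scheduler = Executors.newSingleThreadScheduledExecutor()

    fun subscribe(intervalMs: Long = 1000, callback: (FlightStatus) -> Unit) {
        scheduler.scheduleAtFixedRate({ callback(poll()) }, 0, intervalMs, TimeUnit.MILLISECONDS)
    }

    fun stop() {
        scheduler.shutdownNow()
    }
}

fun main() {
    val feed = FlightStatusFeed { FlightStatus(37.0, -122.0, 50.0) }   // stubbed telemetry source
    feed.subscribe { status -> println("instant message: $status") }
    Thread.sleep(3500)                                                 // observe a few messages
    feed.stop()
}
```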
  • an application can use a ground station class 1207 to perform a series of operations for controlling the movable object.
  • the SDK may require applications to have an SDK-LEVEL-2 key for using the Ground Station Class.
  • the Ground Station Class can provide one-key-fly, one-key-go-home, manually controlling the drone by app (i.e. joystick mode) , setting up a cruise and/or waypoints, and various other task scheduling functionalities.
  • an application can use a communication component for establishing the network connection between the application and the movable object.
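  • As a rough, assumed illustration of how the drone class aggregates the component classes and gates access behind a connection, consider the following Kotlin sketch; none of these classes or methods are the SDK's real definitions.
```kotlin
// Illustrative aggregation only: the drone class owns the component classes, and an
// application connects through it before issuing commands to the components.
class CameraComponent { fun startRecording() = println("recording started") }
class BatteryComponent { fun level(): Int = 87 }
class GimbalComponent { fun setPitch(deg: Float) = println("gimbal pitch -> $deg") }
class FlightControllerComponent { fun takeOff() = println("taking off") }
class GroundStationComponent { fun goHome() = println("one-key go home") }

class Drone {
    val camera = CameraComponent()
    val battery = BatteryComponent()
    val gimbal = GimbalComponent()
    val flightController = FlightControllerComponent()
    val groundStation = GroundStationComponent()
    private var connected = false

    fun connect(): Boolean { connected = true; return connected }   // establish the connection
    fun disconnect() { connected = false }                          // end the connection
}

fun main() {
    val drone = Drone()
    if (drone.connect()) {
        drone.gimbal.setPitch(-30f)
        drone.camera.startRecording()
        println("battery at ${drone.battery.level()}%")
        drone.disconnect()
    }
}
```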
  • FIG. 13 shows a flowchart of a method 1300 of generating a real-time visualization of mapping data on a mobile device in a movable object environment, in accordance with various embodiments.
  • the method can include obtaining mapping data from a scanning sensor.
  • the scanning sensor includes a light detection and ranging (LiDAR) sensor.
  • the mapping data includes point cloud data collected by the LiDAR sensor including a plurality of points and corresponding color data.
  • the movable object is an unmanned aerial vehicle (UAV) .
  • the method can include storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory.
  • the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the plurality of points.
  • the plurality of points can be georeferenced using an inertial navigation system.
  • the real-time visualization is a colorized point cloud based on the color data and the point cloud data.
  • the color data can be obtained from at least one visible light camera from the one or more cameras.
  • the color data represents intensity variation or height variation.
  • the method can include generating a real-time visualization of the mapping data stored in the dynamic buffers.
  • the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system. When the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers.
  • the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
  • the method can include rendering the real-time visualization of the mapping data on a display coupled to the mobile device.
  • rendering the real-time visualization further includes rendering a synchronized side-by-side view of image data obtained from at least one of the one or more cameras and the real-time visualization of the mapping data.
  • the method can further include storing points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size, and rendering the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
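  • A minimal Kotlin sketch of such a fixed-size overview buffer is given below, assuming a simple intensity threshold and ring-style slot reuse; the capacity, threshold, and names are illustrative.
```kotlin
// Points whose intensity exceeds a threshold are copied into a fixed-size overview buffer;
// because it is small, it can be drawn first to give an immediate coarse picture before the
// full dynamic buffers are rendered.
data class ScanPoint(val x: Float, val y: Float, val z: Float, val intensity: Float)

class OverviewBuffer(private val capacity: Int, private val intensityThreshold: Float) {
    private val points = ArrayList<ScanPoint>(capacity)
    private var next = 0

    fun offer(p: ScanPoint) {
        if (p.intensity <= intensityThreshold) return
        if (points.size < capacity) {
            points.add(p)                       // fill until the fixed capacity is reached
        } else {
            points[next] = p                    // then reuse slots in ring-buffer fashion
            next = (next + 1) % capacity
        }
    }

    fun snapshot(): List<ScanPoint> = points.toList()
}
```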
  • the method can further include receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data, converting the gesture-based input into one or more movement commands for at least one of the payload, the movable object, or an adapter apparatus coupling the payload to the movable object, and sending the one or more movement commands to the payload, the movable object, or the adapter apparatus.
  • the method can further include rendering an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
  • the method can further include receiving an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
  • the method can further include receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data, and rendering a manipulated view of the mapping data, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
  • the method can further include storing a copy of the mapping data to disk as a file on the mobile device.
  • the copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
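  • One simple way to persist such a copy is sketched below in Kotlin, assuming a flat binary layout of x, y, z, and intensity per point; the file name and layout are hypothetical.
```kotlin
import java.io.DataOutputStream
import java.io.File
import java.io.FileOutputStream

// Each received batch is appended to a file so the in-memory buffers can be rebuilt after a
// failure, or replayed later as a playback visualization.
fun appendBatch(file: File, batch: List<FloatArray>) {
    DataOutputStream(FileOutputStream(file, true)).use { out ->
        for (point in batch) {                 // point = [x, y, z, intensity]
            point.forEach { out.writeFloat(it) }
        }
    }
}

fun main() {
    val file = File("mapping_data.bin")        // hypothetical file name on the mobile device
    appendBatch(file, listOf(floatArrayOf(1f, 2f, 3f, 0.8f), floatArrayOf(4f, 5f, 6f, 0.2f)))
    println("stored ${file.length()} bytes")
}
```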
  • processors can include, without limitation, one or more general purpose microprocessors (for example, single or multi-core processors) , application-specific integrated circuits, application-specific instruction-set processors, graphics processing units, physics processing units, digital signal processing units, coprocessors, network processing units, audio processing units, encryption processing units, and the like.
  • the storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs) , or any type of media or device suitable for storing instructions and/or data.
  • features can be incorporated in software and/or firmware for controlling the hardware of a processing system, and for enabling a processing system to interact with other mechanisms utilizing the results.
  • software or firmware may include, but is not limited to, application code, device drivers, operating systems and execution environments/containers.
  • features can also be implemented partially or wholly in hardware, e.g., using hardware components such as application specific integrated circuits (ASICs) and field-programmable gate array (FPGA) devices.
  • the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computers, computing devices, machines, or microprocessors, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure.
  • Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
  • a system comprising:
  • a payload coupled to the movable object, the payload comprising a scanning sensor, one or more cameras, and an inertial navigation system (INS) ; and
  • the mobile device in communication with the payload, the mobile device including at least one processor and a memory, the memory including instructions which, when executed by the at least one processor, cause the mobile device to:
  • store the mapping data in one or more dynamic buffers in the memory of the mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
  • the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
  • receive an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
  • render a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
  • store a copy of the mapping data to disk as a file on the mobile device.
  • a method comprising:
  • storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
  • rendering the real-time visualization of the mapping data on a display coupled to the mobile device.
  • the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
  • rendering the real-time visualization of the mapping data on a display coupled to the mobile device further comprises:
  • rendering an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
  • receiving an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
  • rendering a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
  • storing a copy of the mapping data to disk as a file on the mobile device.
  • a non-transitory computer readable storage medium including instructions stored thereon which, when executed by at least one processor, cause the at least one processor to:
  • store the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
  • the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
  • receive an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
  • render a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
  • store a copy of the mapping data to disk as a file on the mobile device.
  • a mobile device comprising:
  • the memory including instructions which, when executed by the at least one processor, cause the mobile device to:
  • store the mapping data in one or more dynamic buffers in the memory, the dynamic buffers comprising non-contiguous portions of the memory;
  • disjunctive language such as the phrase “at least one of A, B, or C, ” is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C) .
  • disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, or at least one of C to each be present.

Abstract

Techniques are disclosed for real-time visualization of mapping data on a mobile device in a movable object environment. A method of real-time visualization of mapping data may include obtaining mapping data from a scanning sensor, storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory, generating a real-time visualization of the mapping data stored in the dynamic buffers, and rendering the real-time visualization of the mapping data on a display coupled to the mobile device.

Description

LARGE SCOPE POINT CLOUD DATA GENERATION AND OPTIMIZATION
COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
FIELD OF THE INVENTION
The disclosed embodiments relate generally to techniques for mapping and more particularly, but not exclusively, to techniques for real-time visualization of mapping data on a mobile device in a movable object environment.
BACKGROUND
Movable objects such as unmanned aerial vehicles (UAVs) can be used for performing surveillance, reconnaissance, and exploration tasks for various applications. Movable objects may carry a payload, including various sensors, which enables the movable object to capture sensor data during movement of the movable objects. The captured sensor data may be rendered on a mobile device, such as a mobile device in communication with the movable object via a remote control, remote server, or other computing device.
SUMMARY
Techniques are disclosed for real-time visualization of mapping data on a mobile device in a movable object environment. A method of real-time visualization of mapping data may include obtaining mapping data from a scanning sensor, storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory, generating a real-time visualization of the mapping data stored in the dynamic buffers, and rendering the real-time visualization of the mapping data on a display coupled to the mobile device.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 illustrates an example of a movable object in a movable object environment, in accordance with various embodiments.
FIG. 2 illustrates an example of a movable object architecture in a movable object environment, in accordance with various embodiments.
FIG. 3 illustrates an example of mobile device and payload architectures, in accordance with various embodiments.
FIG. 4 illustrates an example of an adapter apparatus in a movable object environment, in accordance with various embodiments.
FIG. 5 illustrates an example of generating a real-time visualization of mapping data by a mobile device, in accordance with various embodiments.
FIGS. 6A and 6B illustrate data processing based on a circular buffer in a data processing system, in accordance with various embodiments.
FIG. 7 illustrates an example of a split screen user interface, in accordance with various embodiments.
FIG. 8 illustrates an example of user interface, in accordance with various embodiments.
FIG. 9 illustrates an example of overlaying information in mapping data, in accordance with various embodiments.
FIG. 10 illustrates an example of supporting a movable object interface in a software development environment, in accordance with various embodiments.
FIG. 11 illustrates an example of a movable object interface, in accordance with various embodiments.
FIG. 12 illustrates an example of components for a movable object in a software development kit (SDK) , in accordance with various embodiments.
FIG. 13 shows a flowchart of a method of generating a real-time visualization of mapping data on a mobile device in a movable object environment, in accordance with various embodiments.
DETAILED DESCRIPTION
The invention is illustrated, by way of example and not by way of limitation, in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” or “some” embodiment (s) in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
The following description of the invention describes target mapping using a movable object. For simplicity of explanation, an unmanned aerial vehicle (UAV) is generally used as an example of a movable object. It will be apparent to those skilled in the art that other types of movable objects can be used without limitation.
Light detection and ranging (LiDAR) sensors can be used to generate very accurate maps of a target environment. However, LiDAR sensors produce a significant amount of data that is generally not readily viewed by a person right out of the box. Instead, significant configuration of the LiDAR sensor and additional sensors such as positioning sensors, along with significant post-processing of the collected data, is needed to yield a map that can be usefully interpreted and/or used by a human for various applications. For example, a LiDAR sensor may collect mapping data relative to the LiDAR sensor, and requires a highly accurate inertial navigation system to generate mapping data that can be transformed into a useful coordinate system (e.g., a global coordinate system) . As such, to obtain useful mapping data, the complexity of the system and of the post-processing increases quite rapidly, along with the cost of all of the components that are needed.
Additionally, because the mapping data is not readily human-interpretable in conventional systems the way traditional image data is, the human operator of a mapping system cannot readily identify any areas of the target environment that have not been mapped, or have been incompletely mapped. Instead, the operator of these conventional systems must wait for the data to be post-processed which may take hours or days. Then upon discovering that the mapping data is incomplete, the operator must perform additional mapping missions in the target environment to attempt to complete collection of the mapping data. This process may potentially iterate several times depending on the skill of the operator before the mapping is complete.
Embodiments enable a movable object to map a target environment using a payload that comprises a plurality of sensors. For example, the payload can include a scanning sensor, one or more cameras, and an inertial navigation system (INS) . This payload can be connected to a UAV through a single port which provides a mechanical mounting point as well as managing power and data communication with the payload. The mapping data collected by the payload can be provided to a mobile device, such as a smartphone, tablet, or other mobile device and visualized in real-time. For example, the visualization may be generated and displayed to the operator within a scale of milliseconds (e.g., within 100 milliseconds, or between 10 milliseconds to 1 second) of the mapping data being received and stored by the mobile device. The mobile device can include a real-time visualization application which manages the memory and computing resources of the mobile device to facilitate the real-time visualization generation.
In some embodiments, the real-time visualization application provides both a visualization interface and a control interface through which the operator can control the payload and/or the UAV. For example, when the real-time visualization application displays the current mapping data, the operator can interact with the visualization using the mobile device, such as through a plurality of gesture-based inputs. In some embodiments, the real-time visualization application can simultaneously show a camera view synchronized to a point cloud view. For example, video data from a visible light camera of the payload and a real-time point cloud visualization from a LiDAR sensor of the payload can be displayed simultaneously, providing the operator with a more complete view of the target environment. In some embodiments, the operator can choose to view the synchronized view, or a single view of either the real-time point cloud visualization or a real-time camera view. In some embodiments, the user may interact with the real-time visualization via a user interface, such as through the plurality of gesture-based inputs. In such embodiments, instructions or commands may be sent to the UAV or the payload, such that the poses of the UAV or the payload may be adjusted to change the point cloud view and/or the camera view in response to the user gesture-based inputs in real-time.
FIG. 1 illustrates an example of a movable object in a movable object environment 100, in accordance with various embodiments. As shown in FIG. 1, mobile device 110 in a movable object environment 100 can communicate with a movable object 104 via a communication link 106. The movable object 104 can be an unmanned aircraft, an unmanned vehicle, a handheld device, and/or a robot. The mobile device 110 can be a portable personal computing device, a smart phone, a remote control, a wearable computer, a virtual reality/augmented reality system, and/or a personal computer. Additionally, the mobile device 110 can include a remote control 111 and communication system 120A, which is responsible for handling the communication between the mobile device 110 and the movable object 104 via communication system 120B. For example, the communication between the mobile device 110 and the movable object 104 (e.g., an unmanned aerial vehicle (UAV) ) can include uplink and downlink communication. The uplink communication can be used for transmitting control signals or commands, the downlink communication can be used for transmitting media or video stream, mapping data collected by scanning sensors, or other sensor data collected by other sensors.
In accordance with various embodiments, the communication link 106 can be (part of) a network, which is based on various wireless technologies, such as the WiFi, Bluetooth, 3G/4G, and other radio frequency technologies. Furthermore, the communication link 106 can be based on other computer network technologies, such as the internet technology, or any other wired or wireless networking technology. In some embodiments, the communication link 106 may be a non-network technology, including direct point-to-point connections such as universal serial bus (USB) or universal asynchronous receiver-transmitter (UART) .
In various embodiments, movable object 104 in a movable object environment 100 can include an adapter apparatus 122 and a payload 124, such as a scanning sensor (e.g., a LiDAR sensor) , camera (s) , and/or a collection of sensors in a single payload unit. In various embodiments, the adapter apparatus 122 includes a port for coupling the payload 124 to the movable object 104 which provides power, data communications, and structural support for the payload 124. Although the movable object 104 is described generally as an aircraft, this is not intended to be limiting, and any suitable type of movable object can be used. One of skill in the art would appreciate that any of the embodiments described herein in the context of aircraft  systems can be applied to any suitable movable object (e.g., a UAV) . In some instances, the payload 124 may be provided on the movable object 104 without requiring the adapter apparatus 122.
In accordance with various embodiments, the movable object 104 may include one or more movement mechanisms 116 (e.g., propulsion mechanisms) , a sensing system 118, and a communication system 120B. The movement mechanisms 116 can include one or more of rotors, propellers, blades, engines, motors, wheels, axles, magnets, nozzles, animals, or human beings. For example, the movable object may have one or more propulsion mechanisms. The movement mechanisms may all be of the same type. Alternatively, the movement mechanisms can be different types of movement mechanisms. The movement mechanisms 116 can be mounted on the movable object 104 (or vice-versa) , using any suitable means such as a support element (e.g., a drive shaft) . The movement mechanisms 116 can be mounted on any suitable portion of the movable object 104, such as on the top, bottom, front, back, sides, or suitable combinations thereof.
In some embodiments, the movement mechanisms 116 can enable the movable object 104 to take off vertically from a surface or land vertically on a surface without requiring any horizontal movement of the movable object 104 (e.g., without traveling down a runway) . Optionally, the movement mechanisms 116 can be operable to permit the movable object 104 to hover in the air at a specified position and/or orientation. One or more of the movement mechanisms 116 may be controlled independently of the other movement mechanisms, for example by a real-time visualization application 128 executing on mobile device 110 or other computing device in communication with the movement mechanisms. Alternatively, the movement mechanisms 116 can be configured to be controlled simultaneously. For example, the movable object 104 can have multiple horizontally oriented rotors that can provide lift and/or thrust to the movable object. The multiple horizontally oriented rotors can be actuated to provide vertical takeoff, vertical landing, and hovering capabilities to the movable object 104. In some embodiments, one or more of the horizontally oriented rotors may spin in a clockwise direction, while one or more of the horizontally oriented rotors may spin in a counterclockwise direction. For example, the number of clockwise rotors may be equal to the number of counterclockwise rotors. The rotation rate of each of the horizontally oriented rotors can be varied independently in order to control the lift and/or thrust produced by each rotor, and thereby adjust the spatial disposition, velocity, and/or acceleration of the movable object 104 (e.g., with respect to up to three degrees of translation and up to three degrees of rotation) . As discussed further herein, a controller, such as flight controller 114, can send movement commands to the movement mechanisms 116 to control the movement of movable object 104.  These movement commands may be based on and/or derived from instructions received from mobile device 110 or other entity.
The sensing system 118 can include one or more sensors that may sense the spatial disposition, velocity, and/or acceleration of the movable object 104 (e.g., with respect to various degrees of translation and various degrees of rotation) . The one or more sensors can include any of the sensors, including GPS sensors, real-time kinematic (RTK) sensors, motion sensors, inertial sensors, proximity sensors, or image sensors. The sensing data provided by the sensing system 118 can be used to control the spatial disposition, velocity, and/or orientation of the movable object 104 (e.g., using a suitable processing unit and/or control module) . Alternatively, the sensing system 118 can be used to provide data regarding the environment surrounding the movable object, such as weather conditions, proximity to potential obstacles, location of geographical features, location of manmade structures, and the like.
The communication system 120B enables communication with mobile device 110 via communication link 106, which may include various wired and/or wireless technologies as discussed above, and communication system 120A. The  communication system  120A or 120B may include any number of transmitters, receivers, and/or transceivers suitable for wireless communication. The communication may be one-way communication, such that data can be transmitted in only one direction. For example, one-way communication may involve only the movable object 104 transmitting data to the mobile device 110, or vice-versa. The data may be transmitted from one or more transmitters of the communication system 120B of the movable object 104 to one or more receivers of the communication system 120A of the mobile device 110, or vice-versa. Alternatively, the communication may be two-way communication, such that data can be transmitted in both directions between the movable object 104 and the mobile device 110. The two-way communication can involve transmitting data from one or more transmitters of the communication system 120B of the movable object 104 to one or more receivers of the communication system 120A of the mobile device 110, and transmitting data from one or more transmitters of the communication system 120A of the mobile device 110 to one or more receivers of the communication system 120B of the movable object 104.
In some embodiments, a real-time visualization application 128 executing on mobile device 110 or other computing devices that are in communication with the movable object 104 can provide control data to one or more of the movable object 104, adapter apparatus 122, and payload 124 and receive information from one or more of the movable object 104, adapter apparatus 122, and payload 124 (e.g., position and/or motion information of the movable object, adapter apparatus or payload; data sensed by the payload such as image data captured by one or more payload cameras or mapping data captured by a payload LiDAR sensor; and data  generated from image data captured by the payload camera or LiDAR data generated from mapping data captured by the payload LiDAR sensor) .
In some embodiments, the mobile device comprises a touchscreen display; at least one processor; and a memory, the memory including instructions which, when executed by the at least one processor, cause the mobile device to: obtain mapping data from a scanning sensor; store the mapping data in one or more dynamic buffers in the memory, the dynamic buffers comprising non-contiguous portions of the memory; generate a real-time visualization of the mapping data stored in the dynamic buffers; and render the real-time visualization of the mapping data on a display coupled to the mobile device.
In some embodiments, the control data may result in a modification of the location and/or orientation of the movable object (e.g., via control of the movement mechanisms 116) , or a movement of the payload with respect to the movable object (e.g., via control of the adapter apparatus 122) . The control data from the real-time visualization application 128 may result in control of the payload 124, such as control of the operation of a scanning sensor, a camera or other image capturing device (e.g., taking still or moving pictures, zooming in or out, turning on or off, switching imaging modes, changing image resolution, changing focus, changing depth of field, changing exposure time, changing viewing angle or field of view, adding or removing waypoints, etc. ) .
In some instances, the communications from the movable object, adapter apparatus and/or payload may include information obtained from one or more sensors (e.g., of the sensing system 118 or of the payload 124 or other payload) and/or data generated based on the sensing information. The communications may include sensed information obtained from one or more different types of sensors (e.g., GPS sensors, RTK sensors, motion sensors, inertial sensors, proximity sensors, or image sensors) . Such information may pertain to the position (e.g., location, orientation) , movement, or acceleration of the movable object, adapter apparatus, and/or payload. Such information from a payload may include data captured by the payload or a sensed state of the payload.
In some embodiments, the movable object 104 and/or payload 124 can include one or more processors, such as CPUs, GPUs, field programmable gate arrays (FPGAs) , system on chip (SoC) , application-specific integrated circuit (ASIC) , or other processors and/or accelerators. As discussed, the payload may include various sensors integrated into a single payload, such as a LiDAR sensor, one or more cameras, an inertial navigation system, etc. The payload can collect sensor data that is used to provide LiDAR-based mapping for various applications, such as construction, surveying, target inspection, etc.
In various embodiments, during a mapping mission, sensor data may be obtained by real-time visualization application 128 from the payload 124, e.g., via connection 106 or through another connection between the mobile device 110 and the movable object 104 and/or payload 124. In some embodiments, the connection may be intermediated by one or more other networks and/or systems. For example, the movable object or payload may connect to a server in a cloud computing system, a satellite, or other communication system, which may provide the sensor data to the mobile device 110. When the real-time visualization application 128 receives the sensor data from the payload, it can store the data in a plurality of dynamic buffers (e.g., circular buffers comprising a plurality of buffer blocks) in the memory of the mobile device. The dynamic buffers may include non-contiguous memory space in the mobile device. Additionally, blocks can be added or removed from the dynamic buffers, allowing the size of the dynamic buffers to be adjusted dynamically. Since a large amount of point cloud data may be divided into small blocks of different sizes and stored non-contiguously in any empty buffer blocks, all empty space of the memory buffer may be filled and fully utilized. Therefore, the features provided in the disclosure, including allocating point cloud data to buffer blocks with dynamic size and storing point cloud data non-contiguously, may optimize memory management of the system and provide lower latency for rendering large-scale point cloud data in real-time.
Conventional systems allocate buffers contiguously, so if an application on a mobile device requests 10 MB of memory and there are not 10 contiguous megabytes of memory available, an error will be returned. As such, for a large amount of point cloud data to be rendered in real-time, a large portion of contiguous memory blocks must be allocated to store the data. It also takes more time for the large amount of data to be written/read from the contiguous memory sequentially, compared to data blocks that are stored non-contiguously and that can be written/read from the non-contiguous memory in parallel with lower latency. This makes the storage and rendering of large amounts of data, such as the mapping data to be visualized, on mobile devices difficult, especially in real-time. As such, conventional systems make rendering a large amount of point cloud data inefficient in terms of both memory and time management of the system.
Additionally, the real-time visualization application 128 can store point data in dynamic point buffers and color data in dynamic color buffers. In some embodiments, the sensor data may also be stored to disk on the mobile device 110. By storing to disk, the sensor data can be recovered if the data stored in memory fails, is corrupted, etc. Once stored, the real-time visualization application 128 can then generate a real-time visualization of the sensor data. In some embodiments, the sensor data may be received in batches that represent a particular amount of time. For example, each batch of sensor data may represent 300 milliseconds of  scanning data or scanning data in any amount of time, such as in an amount of time of 10 milliseconds to 1 second. The real-time visualization application 128 can retrieve all or a portion of the sensor data to generate the visualization. In some embodiments, the real-time visualization application 128 can additionally enable measurements of objects, distances, etc., in the point cloud visualization to be obtained.
As discussed, the sensor data can include scanning data obtained from a LiDAR sensor or other sensor that provides high resolution scanning of a target environment, pose data indicating the attitude of the payload when the scanning data was obtained (e.g., from an inertial measurement unit) , and positioning data from a positioning sensor (e.g., a GPS module, RTK module, or other positioning sensor) , where the sensors providing the sensor data are all incorporated into a single payload 124. In some embodiments, various sensors incorporated into the single payload 124 can be pre-calibrated based on extrinsic and intrinsic parameters of the sensors and synchronized based on a reference clock signal shared among the various sensors. The reference clock signal may be generated by time circuitry associated with one of the various sensors or a separate time circuitry connecting the various sensors. In some embodiments, the positioning data from the positioning sensor may be updated based on correction data received from a positioning sensor of the movable object 104 which may be included in functional modules 108, sensing system 118, or a separate module coupled to movable object 104 which provides positioning data for the movable object. In some embodiments, the scanning data can be geo-referenced using the positioning data and used to construct the map of the target environment.
Additional details of the movable object architecture are described below with respect to FIG. 2.
FIG. 2 illustrates an example 200 of a movable object architecture in a movable object environment, in accordance with various embodiments. As shown in FIG. 2, a movable object 104 can include a flight controller 114 that communicates with payload 124 via adapter apparatus 122. Additionally, the flight controller can communicate with various functional modules 108 onboard the movable object. As discussed further below, the adapter apparatus 122 can facilitate communication between the flight controller and the payload via a high bandwidth connection, such as Ethernet or universal serial bus (USB) . The adapter apparatus 122 can further provide power to the payload 124.
As shown in FIG. 2, the payload may include a plurality of sensors, including a scanning sensor 202, a monocular camera 204, an RGB camera 206, an inertial navigation system 208 which may include an inertial measurement unit 210 and a positioning sensor 212, one or more processors 214, and one or more storage devices 216. For example, scanning sensor 202 may  include a LiDAR sensor. The LiDAR sensor may provide high resolution scanning data of a target environment. Various LiDAR sensors may be incorporated into the payload, having various characteristics. For example, the LiDAR sensor may have a field of view of approximately 70 degrees and may implement various scanning patterns, such as a seesaw pattern, an elliptical pattern, a petal pattern, etc. In some embodiments, a lower density LiDAR sensor can be used in the payload, as higher density point clouds require additional processing time. In some embodiments, the payload may implement its components (e.g., one or more processors, one or more cameras, the scanning sensor, INS, etc. ) on a single embedded board. The payload may further provide thermal management for the components.
The payload may further include a greyscale monocular camera 204. The monocular camera 204 may include a mechanical shutter that is synchronized with the inertial navigation system (INS) 208 such that when an image is captured by the monocular camera, the attitude of the payload at that moment is associated with the image data. This enables visual features (walls, corners, points etc. ) to be extracted from image data captured by the monocular camera 204. For example, the visual features that are extracted can be associated with a pose-timestamp signature that is generated from the attitude data produced by the INS. Using the pose-timestamped feature data, visual features can be tracked from one frame to another, which enables a trajectory of the payload (and as a result, the movable object) to be generated. This allows for navigation in areas with limited signal from satellite-based positioning sensors, such as indoors or when RTK data is weak or otherwise unavailable. In some embodiments, the payload may further include an RGB camera 206. The RGB camera can collect live image data that is streamed to the mobile device 110 while the movable object is in flight. For example, the user can select whether to view image data collected by one or more cameras of the movable object or the RGB camera of the payload through a user interface of the mobile device 110. Additionally, color data can be obtained from image data collected by the RGB camera and overlaid on the point cloud data collected by the scanning sensor. This provides improved visualizations of the point cloud data that more closely resemble the actual objects in the target environment being scanned.
As shown in FIG. 2, the payload can further include an inertial navigation system 208. The INS 208 can include an inertial measurement unit 210 and optionally a positioning sensor 212. The IMU 210 provides the attitude of the payload which can be associated with the scanning data and image data captured by the scanning sensor and cameras, respectively. The positioning sensor 212 may use a global navigation satellite service (GNSS) , such as GPS, GLONASS, Galileo, BeiDou, etc. In some embodiments, the positioning data collected by the positioning sensor 212 may be enhanced using an RTK module 218 onboard the movable object. In some embodiments, RTK information can be received wirelessly from one or more base stations. The antenna of the RTK module 218 and the payload are separated by a fixed distance on the movable object, allowing the RTK data collected by the RTK module 218 to be transformed into the IMU frame of the payload. Alternatively, the payload 124 may not include its own positioning sensor 212 and instead may rely on a positioning sensor and RTK module 218 of the movable object, e.g., included in functional modules 108. For example, positioning data may be obtained from the RTK module 218 of the movable object 104 and may be combined with the IMU data. The positioning data obtained from the RTK module 218 can be transformed based on the known distance between the RTK antenna and the payload.
As shown in FIG. 2, the payload can include one or more processors 214. The one or more processors may include an embedded processor that includes a CPU and a DSP acting as an accelerator. In some embodiments, other processors may be used such as GPUs, FPGAs, etc. In some embodiments, the processor 214 can geo-reference the scanning data using the INS data. In some embodiments, the geo-referenced scanning data is downsampled to a lower resolution before being sent to the mobile device 110 for visualization. In some embodiments, payload communication manager 230 can manage downsampling and other data settings for a plurality of mobile devices that connect to the payload. In some embodiments, different mobile devices may be associated with preference data maintained by the payload communication manager 230, where the preference data indicates how the mapping data is to be prepared and/or sent to that mobile device. For example, the preference data may include communication protocol settings, channel settings, encryption settings, transfer rate, downsampling settings, etc.
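The disclosure does not prescribe a specific downsampling method; a voxel-grid filter is one common choice and is sketched below purely as an illustration. The voxel size and the synthetic data are arbitrary assumptions, not parameters of the described system.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce point density by keeping one representative point per voxel.

    points:     (N, 3) array of geo-referenced x, y, z coordinates
    voxel_size: edge length of the cubic voxel, in metres
    """
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Keep the first point that falls in each voxel.
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]

dense = np.random.rand(100_000, 3) * 50.0   # synthetic 50 m x 50 m x 50 m scene
sparse = voxel_downsample(dense, voxel_size=0.5)
print(len(dense), "->", len(sparse), "points")
```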
In some embodiments, the processor (s) 214 can also manage storage of the sensor data to one or more storage devices 216. The storage device (s) can include a secure digital (SD) card or other removable media, a solid-state drive (SSD) , an eMMC, and/or a memory. In some embodiments, the processor can also be used to perform visual inertial odometry (VIO) using the image data collected by the monocular camera 204. This may be performed in real-time to calculate the visual features which are then stored in a storable format (not necessarily as images) . In some embodiments, log data may be stored to an eMMC and debugging data can be stored to an SSD. In some embodiments, the processor (s) can include an encoder/decoder built in for processing image data captured by the RGB camera.
The flight controller 114 can send and receive data to and from the remote control via communication system 120B. Flight controller 114 can connect to various functional modules 108, such as RTK module 218, IMU 220, barometer 222, or magnetometer 224. In some embodiments, communication system 120B can connect to other computing devices instead of,  or in addition to, flight controller 114. In some embodiments, sensor data collected by the one or more functional modules 108 can be passed from the flight controller 114 to the payload 124.
During a mapping mission, the user can receive data from, and provide commands to, the UAV using a real-time visualization application 128 on mobile device 110. In some embodiments, the mobile device 110 can include a mobile communication manager 226 which facilitates communication with the movable object 104 and/or payload 124. The mobile communication manager 226 can act as an abstraction layer between the payload 124 and the mobile device 110. For example, the mobile communication manager 226 can receive sensor data produced by the payload in a first format and provide it to the real-time visualization application 128 in a second format. Additionally, or alternatively, the mobile communication manager 226 can manage the specific interfaces provided by the payload and/or sensors of the payload (e.g., the particular methods required to request sensor data, subscribe to sensor data streams, etc. ) . This way, the real-time visualization application 128 does not have to be separately configured to receive data from every possible sensor of the payload. Additionally, if the sensors of the payload are changed, the real-time visualization application 128 does not have to be reconfigured to consume data from the new sensors. Additionally, the mobile device 110 can include a user interface manager 228. The user interface manager 228 can receive data from the mobile communication manager 226 and initiate workflows as needed to prepare the data for the real-time visualization application. Additionally, the user interface manager 228 can obtain commands from the real-time visualization application 128 (for example, to change the view of the visualization, move the payload and/or movable object, etc. ) and generate corresponding commands to be processed by the movable object, adapter apparatus, and/or payload.
FIG. 3 illustrates an example of mobile device and payload architectures, in accordance with various embodiments. As discussed, a payload 124 may be configured to communicate with a plurality of different mobile devices. A payload communication manager 230 can support one or more payload communication protocols 300 to be used to communicate mapping data, command data, and/or telemetry data with one or more mobile devices. For example, point cloud data may be communicated using a first protocol while telemetry data or other command data may be communicated with a second protocol. Alternatively, a single protocol may be used to communicate all data exchanged between the payload 124 and the mobile device 110. Additionally, payload communication manager 230 can maintain mobile device preferences 302. Different mobile devices, or types of mobile devices, may be associated with different computing resources (e.g., processors, memory, disk space, etc. ) which may determine what, or how much, data is compatible with the mobile device. Payload communication manager 230 can use the mobile device preferences 302 to determine how and whether to prepare data for sending to a particular mobile device. For example, payload communication manager 230 can use the mobile device preferences 302 to manage communication protocol settings, channel settings, encryption settings, transfer rate, downsampling settings, etc. to facilitate communication with the particular mobile device 110 with which it is communicating.
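One way such per-device preferences could be applied is a simple merge of device-specific overrides onto defaults, sketched below. The preference fields, device identifiers, and values are hypothetical and illustrative only; they are not the actual contents of mobile device preferences 302.

```python
# Hypothetical preference records; field names and values are illustrative only.
DEFAULT_PREFS = {
    "protocol": "protocol-a",
    "channel": 0,
    "encrypt": True,
    "max_rate_mbps": 40,
    "voxel_size_m": 0.2,
}

DEVICE_PREFS = {
    "tablet-highend": {"voxel_size_m": 0.05, "max_rate_mbps": 80},
    "phone-lowend": {"voxel_size_m": 0.5, "max_rate_mbps": 10},
}

def prepare_settings(device_id):
    """Merge per-device overrides onto defaults to decide how to prepare and send mapping data."""
    settings = dict(DEFAULT_PREFS)
    settings.update(DEVICE_PREFS.get(device_id, {}))
    return settings

print(prepare_settings("phone-lowend"))
```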
As discussed, the mobile communication manager 226 can receive sensor data produced by the payload and make the sensor data available to a user interface manager 228. When the sensor data includes mapping data (e.g., point cloud data) , the user interface manager 228 can invoke visualization manager 304. Visualization manager 304 can process the mapping data and generate a real-time rendering of the mapping data. The user interface manager 228 can obtain the real-time rendering of the mapping data and provide it to the real-time visualization application 128 to be rendered on a display of the mobile device 110. Mapping data may be received throughout a mapping mission. As additional mapping data is received, the user interface manager 228 can pass the new mapping data to the visualization manager 304 which can process the new mapping data and update the real-time rendering of the mapping data. This may continue throughout the mapping mission. In embodiments, the visualization manager 304 may use dynamic buffers (e.g., circular buffers comprising a plurality of buffer blocks) to optimize memory management and reduce computation load for caching and passing mapping data (e.g., point cloud data) to be rendered on the real-time visualization application 128 through the user interface manager 228. The mapping data may be stored in non-contiguous buffer blocks of the dynamic buffers as new mapping data is received during a mapping mission. When the empty buffer blocks of the dynamic buffers are all filled, new mapping data received during the mapping mission may overwrite old mapping data one buffer block at a time, such that uninterrupted, real-time visualization may be achieved with substantially low latency.
The user can interact with the real-time visualization through the real-time visualization application 128. For example, the user may use gestures (e.g., finger gestures, eye gestures indicating eye movements, head gestures indicating head movement, or other gestures indicating a movement of the user’s body parts) to interact with the real-time visualization (e.g., a translation, rotation, tilt, zoom, etc. ) . The user may also provide commands to be executed by the movable object, payload, and/or adapter apparatus based on the real-time visualization. For example, the user may start or stop recording of the mapping information. Additionally, or alternatively, the user may determine that the target area has not been mapped completely and the user may direct the movable object to the unmapped area (e.g., by instructing the movable object to change its position in the target environment and/or to change the direction in which the payload is oriented) . These commands can be received by the real-time visualization application 128 based on user input to the mobile device 110 and the commands may be passed to the user interface manager 228. In some embodiments, the user interface manager 228 can translate the commands into movable object, payload, and/or adapter apparatus commands. For example, a movement command may be converted from a user interface coordinate system into a movable object coordinate system, world coordinate system, or other coordinate system and used to generate commands that can be executed by a flight controller, the payload, adapter apparatus, etc. to move the movable object and/or payload appropriately. In some embodiments, the commands may be to change the view of the point cloud visualization. The user interface manager 228 can obtain the commands from the real-time visualization application 128 and instruct the visualization manager 304 to update the visualization based on the commands.
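A minimal Python sketch of this kind of coordinate conversion is shown below: a one-finger drag in screen pixels is mapped to yaw and pitch deltas for the payload or gimbal. The field-of-view scaling, sign conventions, and function name are assumptions for illustration and do not represent the user interface manager's actual conversion.

```python
def drag_to_gimbal_command(dx_px, dy_px, screen_w, screen_h,
                           horizontal_fov_deg=70.0, vertical_fov_deg=55.0):
    """Map a one-finger drag (pixels) to yaw/pitch deltas (degrees).

    The drag is scaled by the field of view so that dragging across the
    whole screen sweeps roughly one field of view.  The sign convention
    (drag right -> yaw right, drag up -> pitch up) is an assumption.
    """
    yaw_delta = (dx_px / screen_w) * horizontal_fov_deg
    pitch_delta = -(dy_px / screen_h) * vertical_fov_deg
    return {"yaw_deg": yaw_delta, "pitch_deg": pitch_delta}

print(drag_to_gimbal_command(dx_px=200, dy_px=-100, screen_w=1920, screen_h=1080))
```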
In various embodiments, user interface manager 228 can make a variety of user interface widgets available for the real-time visualization application. For example, the real-time visualization application 128 can include a Lidar Settings widget for changing LiDAR settings, such as sampling rate, scan mode, and echo mode, and a Point Cloud Record widget which triggers start, stop, pause, and resume of point cloud recording on the payload. Once recording is enabled, the collected point cloud data is written to a file on the payload’s storage system (e.g., fixed media, such as a hard disk drive, removable media, such as an SD card, etc. ) . In some embodiments, the real-time visualization application 128 can include a Point cloud checklist widget which indicates the current status of the payload, confirming that data can be recorded correctly and that all systems are ready to start recording.
In some embodiments, the real-time visualization application 128 can include a LiDAR map widget which includes the 3D point cloud visualization that displays the point cloud and enables the visualization to be moved (e.g., reoriented, moved to a new location in the target environment, etc. ) according to the user’s gesture inputs and/or the movement of the movable object, the payload, and/or the payload’s adapter apparatus. In some embodiments, the LiDAR map widget includes a communication layer to the payload and includes data de-serializers for the point cloud data received from the payload. In some embodiments, the real-time visualization application 128 can include a Perspective modes widget which enables the user to quickly change the perspective of the point cloud to view the front, side, or top views of the model being generated in real time. In some embodiments, the real-time visualization application 128 can include a Throughput widget which provides information about the point cloud, such as the number of points, the size of the model, etc. In some embodiments, the real-time visualization application 128 can include a Lidar map scale widget which provides the scale of the point cloud as the user zooms in/out. Additionally, in some embodiments, the real-time visualization application 128 can include a Point cloud playback widget which is used to play back a point cloud file that was previously recorded.
FIG. 4 illustrates an example of an adapter apparatus in a movable object environment, in accordance with various embodiments. As shown in FIG. 4, an adapter apparatus 122 enables a payload 124 to be connected to a movable object 104. In some embodiments, the adapter apparatus 122 is a Payload Software Development Kit (SDK) adapter plate, an adapter ring, or the like. The payload 124 can be connected to the adapter apparatus 122, and the adapter apparatus can be coupled with the fuselage of the movable object 104. In some embodiments, the adapter apparatus may include a quick release connector by which the payload can be attached and detached.
When the payload 124 is connected to the movable object 104 through the adapter apparatus 122, the payload 124 can also be controlled by a mobile device 110 via a remote control 111. As shown in FIG. 4, the remote control 111 and/or real-time visualization application 128 can send a control instruction through a command channel between the remote control and the communication system of the movable object 104. The control instruction can be transmitted to control the movable object 104 and/or the payload 124. For example, the control instruction may be used for controlling the attitude of the payload, to selectively view live data being collected by the payload (e.g., real-time low density mapping data, image data, etc. ) on the mobile device, etc.
As shown in FIG. 4, after the communication system of the movable object 104 receives the control instruction, the control instruction is sent to the adapter apparatus 122. The communication protocol between the communication system of the movable object and the adapter apparatus may be referred to as an internal protocol, and the communication protocol between the adapter apparatus and the payload 124 may be referred to as an external protocol. In an embodiment, the internal protocol between the communication system of the movable object 104 and the adapter apparatus 122 is recorded as a first communication protocol, and the external protocol between the adapter apparatus 122 and the payload 124 is recorded as a second communication protocol. After the communication system of the movable object receives the control instruction, the first communication protocol is adopted to send the control instruction to the adapter apparatus through a command channel between the communication system and the adapter apparatus.
When the adapter apparatus receives the control instruction sent by the movable object using the first communication protocol, the internal protocol between the communication system of the movable object and the adapter apparatus is converted into an external protocol between the adapter apparatus and the payload 124. In some embodiments, the internal protocol can be  converted into the external protocol by the adapter apparatus by adding a header conforming to the external protocol in the outer layer of the internal protocol message, so that the internal protocol message is converted into an external protocol message.
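A minimal Python sketch of this kind of encapsulation is shown below. The header layout (magic byte, version, payload length, checksum) is entirely hypothetical and is included only to illustrate wrapping an internal-protocol message inside an external-protocol frame; it is not the disclosed protocol format.

```python
import struct

EXTERNAL_MAGIC = 0xE7      # hypothetical start-of-frame byte
EXTERNAL_VERSION = 1       # hypothetical protocol version

def wrap_internal_message(internal_msg: bytes) -> bytes:
    """Convert an internal-protocol message into an external-protocol frame
    by prepending a header (magic, version, payload length) and appending a checksum."""
    header = struct.pack("<BBH", EXTERNAL_MAGIC, EXTERNAL_VERSION, len(internal_msg))
    checksum = sum(internal_msg) & 0xFF
    return header + internal_msg + bytes([checksum])

def unwrap_external_frame(frame: bytes) -> bytes:
    """Strip the external header and trailer and return the inner internal-protocol message."""
    magic, version, length = struct.unpack_from("<BBH", frame, 0)
    assert magic == EXTERNAL_MAGIC and version == EXTERNAL_VERSION
    body = frame[4:4 + length]
    assert (sum(body) & 0xFF) == frame[4 + length]
    return body

msg = b"\x01\x02set-attitude"
assert unwrap_external_frame(wrap_internal_message(msg)) == msg
```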
As shown in FIG. 4, the communication interface between the adapter apparatus and the payload 124 may include a Controller Area Network (CAN) interface or a Universal Asynchronous Receiver/Transmitter (UART) interface. After the adapter apparatus converts the internal protocol between the communication system of the movable object and the adapter apparatus into an external protocol between the adapter apparatus and the payload 124, the control instruction is sent to the payload 124 through the CAN interface or the UART interface by using an external protocol.
As discussed, the payload 124 can collect sensor data from a plurality of sensors incorporated into the payload, such as a LiDAR sensor, one or more cameras, an INS, etc. The payload 124 can send sensor data to the adapter apparatus through a network port between the payload 124 and the adapter apparatus. Alternatively, the payload 124 may also send sensor data through a CAN interface or a UART interface between the payload 124 and the adapter apparatus. Optionally, the payload 124 sends the sensor data to the adapter apparatus through the network port, the CAN interface or the UART interface using a second communication protocol, e.g., the external protocol.
After the adapter apparatus receives the sensor data from the payload 124, the adapter apparatus converts the external protocol between the adapter apparatus and the payload 124 into an internal protocol between the communication system of the movable object 104 and the adapter apparatus. In some embodiments, the adapter apparatus uses an internal protocol to send sensor data to a communication system of the movable object through a data channel between the adapter apparatus and the movable object. Further, the communication system sends the sensor data to the remote control 111 through the data channel between the movable object and the remote control 111, and the remote control 111 forwards the sensor data to the mobile device 110.
After the adapter apparatus receives the sensor data sent by the payload 124, the sensor data can be encrypted to obtain encrypted data. Further, the adapter apparatus uses the internal protocol to send the encrypted data to the communication system of the movable object through the data channel between the adapter apparatus and the movable object, the communication system sends the encrypted data to the remote control 111 through the data channel between the movable object and the remote control 111, and the remote control 111 forwards the encrypted data to the mobile device 110.
In some embodiments, the payload 124 can be mounted on the movable object through the adapter apparatus. When the adapter apparatus receives the control instruction for controlling the payload 124 sent by the movable object, the adapter apparatus converts the internal protocol between the movable object and the adapter apparatus into the external protocol between the adapter apparatus and the payload 124 and sends the control instruction to the payload 124 using the external protocol. In this way, a third-party device produced by a third-party manufacturer can communicate with the movable object through the external protocol, allowing the movable object to support the third-party device and expanding the range of applications of the movable object.
In some embodiments, to facilitate communication with the payload, the adapter apparatus sends a handshake instruction to the payload 124, where the handshake instruction is used to detect whether the adapter apparatus and the payload 124 have a normal communication connection. In some embodiments, the adapter apparatus can also send a handshake instruction to the payload 124 periodically or at arbitrary times. If the payload 124 does not answer, or if the response message from the payload 124 is incorrect, the adapter apparatus can disconnect the communication connection with the payload 124, or the adapter apparatus can limit the functions available to the payload.
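A minimal sketch of such a handshake monitor is shown below. The class name, the timeout value, and the `send_handshake`/`disconnect` callables are placeholders for whatever transport the adapter apparatus actually uses; the sketch only illustrates the "no reply or wrong reply leads to disconnect" behavior described above.

```python
import time

class HandshakeMonitor:
    """Track handshake replies from the payload and fall back when they stop.

    `send_handshake` and `disconnect` are placeholder callables standing in
    for the adapter apparatus's real transport functions.
    """

    def __init__(self, send_handshake, disconnect, timeout_s=2.0):
        self.send_handshake = send_handshake
        self.disconnect = disconnect
        self.timeout_s = timeout_s
        self.last_reply = time.monotonic()

    def on_reply(self, ok: bool):
        if not ok:
            self.disconnect()          # wrong response: drop the link
        else:
            self.last_reply = time.monotonic()

    def poll(self):
        """Called periodically: send a handshake and check for a stale connection."""
        self.send_handshake()
        if time.monotonic() - self.last_reply > self.timeout_s:
            self.disconnect()          # no answer within the timeout
```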
The adapter apparatus may also comprise a power interface for supplying power to the payload 124. As shown in FIG. 4, the movable object can supply power to the adapter apparatus, and the adapter apparatus can in turn supply power to the payload 124 through the power interface. In various embodiments, the communication interface between the movable object and the adapter apparatus may include a Universal Serial Bus (USB) interface.
As shown in FIG. 4, the data channel between the communication system of the movable object and the adapter apparatus can be implemented using a USB interface. In some embodiments, the adapter apparatus can convert the USB interface into a network port, such as an Ethernet port. The payload 124 can exchange data with the adapter apparatus through the network port, so that the payload 124 can conveniently communicate with the adapter apparatus over the transmission control protocol (TCP) without requiring a USB driver.
In some embodiments, the interfaces externally exposed by the movable object comprise a CAN port, a USB port, and a 12V 4A power supply port. The CAN port, the USB port, and the 12V 4A power port are each connected to the adapter apparatus, and the adapter apparatus performs protocol conversion on these interfaces such that a pair of external interfaces can be generated.
FIG. 5 illustrates an example 500 of generating a real-time visualization of mapping data by a mobile device, in accordance with various embodiments. As discussed, visualization manager 304 can render a real-time visualization of mapping data that does not require the mapping data to first be post-processed into a file. The visualization manager 304 can be implemented using a graphics application programming interface (API) , such as OpenGL 2.0, or other graphics APIs, libraries, etc. When a visualization request is received (e.g., from user interface manager 228, as described above) , the request can be received by a map view manager 502 which coordinates the various modules of the visualization manager 304 to generate the real-time visualization.
For example, when mapping data is received, the map view manager 502 can instruct storage manager 504 to store the mapping data to memory 506. The mapping data can require a significant amount of memory for storage which may not be available as a contiguous block of memory. As such, dynamic indexing system 508 can allocate one or more dynamic circular (or ring) buffers 510 that comprise a plurality of non-contiguous blocks of memory. As data is written to empty buffer blocks, the dynamic indexing system 508 maintains an index of cursors to the blocks where the data is stored. In some embodiments, the storage manager 504 can store some data into an overview buffer 512. For example, points of the mapping data having an intensity value greater than a threshold may be stored to the overview buffer 512. In some embodiments, point data having an intensity value less than or equal to the threshold value may be randomly sampled and added to the overview buffer 512, such that the points with greater intensity values, which are selected and stored first, appear smoothed when rendered on a display of the mobile device. The overview buffer 512 can be of a fixed memory size. During rendering, the overview buffer 512 can be rendered first. In some embodiments, if the overview buffer 512 is full and new data (e.g., new points) is received that is associated with intensity values higher than the intensity threshold, then the new data can overwrite at least a portion of the old data in the overview buffer. For example, the oldest points may be overwritten by the newest points. Alternatively, a random sampling of older points may be overwritten by the newer points. In some embodiments, if the new data is associated with an intensity value smaller than or equal to the intensity threshold, then this data is written to the dynamic buffers. If the dynamic buffers are full, then old points in the dynamic buffers are overwritten. As discussed with respect to the overview buffers, the data that is overwritten may correspond to a random sampling of data stored in the dynamic buffers or may correspond to the oldest data stored in the dynamic buffers.
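The routing of incoming points between the fixed-size overview buffer and the dynamic buffers can be sketched as follows. The threshold, capacity, and sampling probability values are illustrative assumptions rather than parameters of the described system; the sketch only shows the split by intensity, the random sampling of low-intensity points, and the random overwrite when the overview buffer is full.

```python
import random

INTENSITY_THRESHOLD = 180      # illustrative intensity threshold
OVERVIEW_CAPACITY = 50_000     # fixed-size overview buffer (number of points)
SAMPLE_PROBABILITY = 0.05      # chance of also keeping a low-intensity point

overview_buffer = []

def route_point(point, intensity):
    """Decide whether a point goes to the overview buffer or to the dynamic buffers."""
    if intensity > INTENSITY_THRESHOLD:
        if len(overview_buffer) < OVERVIEW_CAPACITY:
            overview_buffer.append(point)
        else:
            # Overview buffer full: overwrite a randomly chosen older entry.
            overview_buffer[random.randrange(OVERVIEW_CAPACITY)] = point
        return "overview"
    if len(overview_buffer) < OVERVIEW_CAPACITY and random.random() < SAMPLE_PROBABILITY:
        overview_buffer.append(point)   # random sample of low-intensity points for smoothing
        return "overview"
    return "dynamic"

print(route_point((1.0, 2.0, 3.0), intensity=200))
```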
The remaining points of the mapping data can be stored to the dynamic buffers 510. As shown in FIG. 5, in some embodiments, each point can include position data (e.g., x, y, z coordinates) and color data (e.g., RGB values) corresponding to that point. The color data may correspond to colors extracted from image data captured by a visible light camera of the payload, intensity data, height data, or other data being represented by color values. The points can be stored in point buffers 514 and the color data can be stored in color buffers 516. In some embodiments, the storage manager 504 can additionally store the point data in a point cloud data file 518 on disk storage 520 of mobile device 110. As discussed, if the point data in memory is corrupted or otherwise lost, this provides a backup source from which it can be recovered. The point cloud data file 518 can also be used for playback of the visualization at a different point in time (e.g., rather than in real-time) .
Map view manager 502 can additionally manage construction of the real-time visualization by invoking point cloud view manager 522. Point cloud view manager 522 can invoke current view manager 524 which causes the visualization to be displayed based on view settings and current camera position. The current view manager 524 can retrieve the point data from memory (e.g., the point data stored in overview buffer 512 and the dynamic buffers 510, including the point data from point buffer 514 and corresponding color data from color buffer 516) and issue draw calls using the retrieved point data. In some embodiments, the current view manager 524 can access the memory 506 via point cloud painter 526. For example, given the current view settings, point cloud painter 526 can access memory 506 (either directly or via storage manager 504, as shown in FIG. 5) and retrieve the appropriate portions of memory including the point data for the current view. In some embodiments, this may include one or more buffer blocks corresponding to the most recently stored point data.
In some embodiments, Camera Controller 530 can calculate model, view and projection (MVP) matrices to enable movement of the point cloud based on input received from the user. As discussed, the user input may include gesture controls. The gesture controls can be identified by UI controller 528. UI controller 528 can include listeners for gesture events related to navigating the point cloud visualization, such as translation with one finger (up/down/left/right) ; rotation about origin with two fingers (up/down/left/right) ; screen tilt with two fingers rotating about the center of the screen; zoom in/out with two-finger pinch; reset view when double-tapping the screen, etc. The MVP matrices can be used to update the visualization. Because the visualization is generated using the in-memory representation of the point data, rather than requiring the point data to be post-processed into a file, the visualization can be generated very quickly or substantially in real-time. For example, the point data can be provided in batches to visualization manager 304. Upon storing the batch in memory, the visualization can be generated and/or updated to reflect the newly received data on a scale of milliseconds, such as within 100 milliseconds, from the time at which the mapping data is acquired by the scanning sensor of the payload, transmitted by the payload, received by the mobile device, or stored in the memory of the mobile device. In some embodiments, the time needed to update the visualization from the time at which the new data has been acquired by the scanning sensor of the payload, transmitted by the payload, received by the mobile device, or stored in memory on the mobile device can vary between 10 milliseconds and 1 second.
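The combination of gesture state into MVP matrices can be illustrated with the following sketch. The projection parameters, the mapping from pinch zoom to camera distance, and the restriction of rotation to a single axis are simplifying assumptions made for illustration; they are not the actual matrices computed by Camera Controller 530.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

def rotation_z(angle_deg):
    c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def mvp_from_gestures(pan_x, pan_y, rotation_deg, zoom, aspect):
    """Combine gesture state (pan, two-finger rotation, pinch zoom) into a single MVP matrix."""
    model = rotation_z(rotation_deg)                        # two-finger rotation about the origin
    view = translation(pan_x, pan_y, -10.0 / max(zoom, 1e-3))   # pan and pinch-zoom move the camera
    projection = perspective(45.0, aspect, 0.1, 1000.0)
    return projection @ view @ model

print(mvp_from_gestures(pan_x=0.5, pan_y=0.0, rotation_deg=30.0, zoom=2.0, aspect=16 / 9).shape)
```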
FIGS. 6A and 6B illustrate data processing based on a circular buffer in a data processing system, in accordance with various embodiments. As discussed, dynamic buffer (s) 510 in FIG. 5 can be implemented as circular buffers 610 with dynamic size. In some embodiments, overview buffer (s) 512 in FIG. 5 can be implemented as circular buffers 610 with fixed size. As shown in Figure 6A, an upstream module, e.g., a data processor A 601, and a downstream module, e.g., a data processor B 602, can take advantage of a circular buffer 610 for exchanging data. In embodiments, the upstream data processor A 601 and the downstream data processor B 602 may be the same processor or different processors on the mobile device. In accordance with various embodiments, the circular buffer 610, which may comprise a plurality of buffer blocks, is advantageous for buffering data streams, e.g., data blocks of point cloud data, due to its circular topological data structure. In various embodiments, the plurality of buffer blocks may be noncontiguous memory blocks which are indexed using cursors by a dynamic indexing system such that they are treated as a single buffer, optimizing memory management and reducing the processing time to write/read mapping data in the memory.
In accordance with various embodiments, a circular buffer management mechanism can be used for maintaining the circular buffer 610. For example, the data processor A 601 can write 621 a point cloud data in blocks into a buffer block 611, which may be referred to as a write block (WR) . Also, the data processor B 602 can read 622 a data block out from a buffer block 612, which may be referred to as a read block (RD) . Additionally, the circular buffer 610 may comprise one or more ready blocks (RYs) stored in one or more buffer blocks. A ready block 613 is written by an upstream module, e.g. the data processor A 601, in a buffer block and has not yet been processed by the downstream module, e.g. the data processor B 602. There can be multiple ready blocks in the circular buffer 610, when the data processor B 602 is lagging behind the data processor A 601 in processing data in the circular buffer 610.
In accordance with various embodiments, the system may reach the optimal state with minimum delay when the downstream module can keep up with the progress of the upstream module. For example, Figure 6B illustrates data processing with low latency based on a circular buffer in a data processing system, in accordance with various embodiments. As shown in Figure 6B, the buffer block 614 in the circular buffer 610 includes a data block, which acts as both the write block for the data processor A 601 and the read block for the data processor B 602. Both the data processor A 601 and the data processor B 602 may access the same buffer block 614 simultaneously. For example, the data processor A 601 may be writing 623 data of a data block in the buffer block 614 while the data processor B 602 is reading 624 data out from the buffer block 614.
As shown in Figures 6A and 6B, the data processor A 601 can provide the fine granular control information 620 to the data processor B 602, so that the data processor B 602 can keep up with the progress of the data processor A 601. As a result, there may be no ready block in the circular buffer 610 (i.e. the number of ready blocks in the circular buffer 610 is zero) .
In various embodiments, storage manager 504 can write point data to the dynamic buffers sequentially to empty blocks. As data is written, the dynamic indexing system 508 is updated to track where in memory the data has been stored. If the buffer blocks of the circular buffer are filled, then old data is overwritten by new data. In some embodiments, the oldest points are overwritten by the newest points. However, in some embodiments, old points are randomly selected to be overwritten by new points. For example, the dynamic indexing system can be used to identify blocks associated with the oldest point data, and then a portion of those blocks is selected randomly and overwritten with new data. This preserves some of the oldest points without dropping new points.
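A compact sketch of this write-and-overwrite policy is shown below. The block counts, sizes, and the "choose randomly among the oldest few blocks" rule are illustrative assumptions standing in for the dynamic indexing system; the sketch is not the disclosed implementation.

```python
import random
from collections import deque

class DynamicCircularBuffer:
    """Point storage across fixed-size blocks indexed in insertion order.

    The blocks stand in for the non-contiguous memory regions tracked by the
    dynamic indexing system; sizes and policies here are illustrative only.
    """

    def __init__(self, num_blocks=8, block_size=1024):
        self.block_size = block_size
        self.blocks = [[] for _ in range(num_blocks)]
        self.free = deque(range(num_blocks))      # empty blocks, in order
        self.order = deque()                      # filled blocks, oldest first

    def write_batch(self, points):
        for i in range(0, len(points), self.block_size):
            chunk = points[i:i + self.block_size]
            if self.free:
                idx = self.free.popleft()
            else:
                # Full: pick one of the oldest few blocks at random to overwrite,
                # preserving some old points instead of always dropping the oldest.
                oldest = [self.order.popleft() for _ in range(min(4, len(self.order)))]
                idx = random.choice(oldest)
                for other in oldest:
                    if other != idx:
                        self.order.appendleft(other)
            self.blocks[idx] = list(chunk)
            self.order.append(idx)

    def read_all(self):
        return [p for idx in self.order for p in self.blocks[idx]]

buf = DynamicCircularBuffer(num_blocks=4, block_size=2)
buf.write_batch([(x, 0.0, 0.0) for x in range(12)])
print(len(buf.read_all()), "points retained")
```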
FIG. 7 illustrates an example of a split screen user interface 700, in accordance with various embodiments. As shown in FIG. 7, the real-time visualization application 128 can include a camera view 702 and a point cloud view 704. The camera view can be captured by a visible light camera of the payload and the point cloud view can be a visualization generated based on point cloud data collected by a LiDAR sensor of the payload. These views can be synchronized such that the views are of approximately the same field of view at the same points in time. In some embodiments, this split screen view does not allow for independent control of the point cloud representation (e.g., via gesture based inputs) . The user can select a different view via  icons  706 and 708, which allow for a selection of a view only of the visible light camera or only the point cloud data, respectively. Additionally, in various embodiments, the user interface can include a recording widget 712 which can be selected to enable/disable recording. Additionally, the user interface can include a photo-video switching widget which can be selected to switch the imaging device between photo shooting and video recording. Additionally, an FPV view 714 can show a first person view from the perspective of the movable object via a visible light camera integrated into the movable object.
In some embodiments, in addition to the recording widget and the photo-video switching widget, the user interface can enable playback of previously stored visualizations, enable starting and stopping storage of the mapping data, and enable changing settings of the LiDAR, such as sampling rate, scan mode, echo mode, etc.
FIG. 8 illustrates an example of user interface 800, in accordance with various embodiments. As shown in FIG. 8, the user can select the point cloud view icon 708 which causes the user interface to display only the point cloud view visualization. While in the point cloud visualization view, the user can interact with the visualization using gestures. For example, the user can select a point 802 (e.g., by tapping on a touchscreen interface) and dragging their finger to point 804 to cause a translation of the point cloud visualization. Other gestures, as discussed, may include rotation about an origin with two fingers (up/down/left/right) ; screen tilt with two fingers rotating about the center of the screen; zoom in/out with two-finger pinch; reset view when double-tapping the screen, etc. In the example of FIG. 8, the point cloud visualization has been overlaid with color information representing the LiDAR intensity associated with the points (e.g., via selection of icon 806) . The user may also select different overlays, such as height 808 or visible light 810.
FIG. 9 illustrates an example 900 of overlaying data values in mapping data, in accordance with various embodiments. As shown in FIG. 9, overlay information 902 can be obtained from the RGB camera or other sensors incorporated into the payload. For example, in some embodiments, the overlay data can include color data which may include pixel values of various color schemes (e.g., 16-bit, 32-bit, etc. ) . The color data can be extracted from one or more images captured by the RGB camera at the same time as the point cloud data was captured by the scanning sensor and these color values can be overlaid on the visualization of the point cloud data. Although depicted in FIG. 9 as grayscale, the color data may include various color values depending on the color values of the image data captured by the RGB camera.
In some embodiments, the overlay data can include height above a reference plane. For example, a color value may be assigned to points depending on their height above the reference plane. The height values may be relative height values, relative to the reference plane, or absolute height values (e.g., relative to sea level) . The reference plane may correspond to the ground, floor, or an arbitrary plane selected by the user. The values may vary monochromatically as the height changes or may change colors as the height changes.
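One simple way such a height overlay could be produced is a linear color ramp over a reference range, sketched below. The blue-to-red ramp, the reference range, and the function name are illustrative assumptions, not the actual color mapping used by the system.

```python
def height_to_color(z, z_min=0.0, z_max=30.0):
    """Map a height above the reference plane to an RGB triple (blue -> red).

    The colour ramp and the reference range are illustrative choices.
    """
    t = min(max((z - z_min) / (z_max - z_min), 0.0), 1.0)
    return (int(255 * t), 0, int(255 * (1.0 - t)))   # low heights blue, high heights red

print(height_to_color(5.0), height_to_color(25.0))
```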
In some embodiments, the overlay data can represent intensity values. The intensity values may correspond to a return strength of the laser beam as received by the LiDAR sensor. The intensity values may indicate material composition or characteristics of objects in the target environment. For example, based on the reflectivity of the material, characteristics of the material can be inferred (e.g., type of material, such as metal, wood, concrete, etc. ) and the overlay information may indicate these characteristics through different color values assigned to  the different characteristics. Additionally, or alternatively, in some embodiments, the point cloud data can be overlaid on a map of the target area being scanned. For example, the point cloud data can be overlaid on a two dimensional or three dimensional map provided by a mapping service.
FIG. 10 illustrates an example of supporting a movable object interface in a software development environment, in accordance with various embodiments. As shown in FIG. 10, a movable object interface 1003 can be used for providing access to a movable object 1001 in a software development environment 1000, such as a software development kit (SDK) environment. As used herein, the SDK can be an onboard SDK implemented on an onboard environment that is coupled to the movable object 1001. The SDK can also be a mobile SDK implemented on an off-board environment that is coupled to a mobile device. Furthermore, the movable object 1001 can include various functional modules A-C 1011-1013, and the movable object interface 1003 can include different interfacing components A-C 1031-1033. Each said interfacing component A-C 1031-1033 in the movable object interface 1003 corresponds to a module A-C 1011-1013 in the movable object 1001. In some embodiments, the interfacing components may be rendered on a user interface of a display of a mobile device or other computing device in communication with the movable object. In such an example, the interfacing components, as rendered, may include selectable command buttons for receiving user input/instructions to control corresponding functional modules of the movable object.
In accordance with various embodiments, the movable object interface 1003 can provide one or more callback functions for supporting a distributed computing model between the application and movable object 1001.
The callback functions can be used by an application for confirming whether the movable object 1001 has received the commands. Also, the callback functions can be used by an application for receiving the execution results. Thus, the application and the movable object 1001 can interact even though they are separated in space and in logic.
As shown in FIG. 10, the interfacing components A-C 1031-1033 can be associated with the listeners A-C 1041-1043. A listener A-C 1041-1043 can inform an interfacing component A-C 1031-1033 to use a corresponding callback function to receive information from the related module (s) .
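The listener-and-callback arrangement can be illustrated with the minimal sketch below. The class names and the result payload are hypothetical and do not correspond to the SDK's real identifiers; the sketch only shows a listener informing an interfacing component so that the component's callback receives information from the related module.

```python
class InterfacingComponent:
    """Receives information from a movable-object module through a callback."""

    def __init__(self, name):
        self.name = name
        self.last_result = None

    def on_result(self, result):
        self.last_result = result
        print(f"{self.name} received: {result}")

class Listener:
    """Informs an interfacing component when its module reports back."""

    def __init__(self, component):
        self.component = component

    def notify(self, result):
        self.component.on_result(result)

camera_component = InterfacingComponent("camera")
camera_listener = Listener(camera_component)
# A module on the movable object would trigger this when a command completes.
camera_listener.notify({"command": "take_photo", "status": "ok"})
```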
Additionally, a data manager 1002, which prepares data 1020 for the movable object interface 1003, can decouple and package the related functionalities of the movable object 1001. The data manager 1002 may be onboard, that is, coupled to or located on the movable object 1001, in which case it prepares the data 1020 to be communicated to the movable object interface 1003 via communication between the movable object 1001 and a mobile device. The data manager 1002 may be off-board, that is, coupled to or located on a mobile device, in which case it prepares data 1020 for the movable object interface 1003 via communication within the mobile device. Also, the data manager 1002 can be used for managing the data exchange between the applications and the movable object 1001. Thus, the application developer does not need to be involved in the complex data exchanging process.
For example, the onboard or mobile SDK can provide a series of callback functions for communicating instant messages and for receiving the execution results from a movable object. The onboard or mobile SDK can configure the life cycle for the callback functions in order to make sure that the information interchange is stable and complete. For example, the onboard or mobile SDK can establish a connection between a movable object and an application on a smart phone (e.g., using an Android system or an iOS system) . Following the life cycle of a smart phone system, the callback functions, such as the ones receiving information from the movable object, can take advantage of the patterns in the smart phone system and update their states according to the different stages in the life cycle of the smart phone system.
FIG. 11 illustrates an example of a movable object interface, in accordance with various embodiments. As shown in FIG. 11, a movable object interface 1103 can be rendered on a display of a mobile device or other computing devices representing statuses of different components of a movable object 1101. Thus, the applications, e.g., APPs 1104-1106, in the movable object environment 1100 can access and control the movable object 1101 via the movable object interface 1103. As discussed, these apps may include an inspection app 1104, a viewing app 1105, and a calibration app 1106.
For example, the movable object 1101 can include various modules, such as a camera 1111, a battery 1112, a gimbal 1113, and a flight controller 1114.
Correspondently, the movable object interface 1103 can include a camera component 1121, a battery component 1122, a gimbal component 1123, and a flight controller component 1124 to be rendered on a computing device or other computing devices to receive user input/instructions by way of using the APPs 1104-1106.
Additionally, the movable object interface 1103 can include a ground station component 1126, which is associated with the flight controller component 1124. The ground station component operates to perform one or more flight control operations, which may require a high-level privilege.
FIG. 12 illustrates an example of components for a movable object in a software development kit (SDK) , in accordance with various embodiments. As shown in FIG. 12, the drone class 1201 in the SDK 1200 is an aggregation of other components 1202-1207 for a movable object (e.g., a drone) . The drone class 1201, which has access to the other components 1202-1207, can exchange information with and control the other components 1202-1207.
In accordance with various embodiments, an application may have access to only one instance of the drone class 1201. Alternatively, multiple instances of the drone class 1201 can be present in an application.
In the SDK, an application can connect to the instance of the drone class 1201 in order to upload the controlling commands to the movable object. For example, the SDK may include a function for establishing the connection to the movable object. Also, the SDK can disconnect the connection to the movable object using an end connection function. After connecting to the movable object, the developer can have access to the other classes (e.g. the camera class 1202, the battery class 1203, the gimbal class 1204, and the flight controller class 1205) . Then, the drone class 1201 can be used for invoking the specific functions, e.g. providing access data which can be used by the flight controller to control the behavior, and/or limit the movement, of the movable object.
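A minimal sketch of this connect-then-access pattern is shown below. The class names mirror the structure of FIG. 12 but are not the SDK's real identifiers, and the connection logic is a placeholder for the SDK's actual handshake.

```python
class Camera: ...
class Battery: ...
class Gimbal: ...
class FlightController: ...

class Drone:
    """Aggregates the component classes; names follow FIG. 12 but are illustrative only."""

    def __init__(self):
        self.camera = Camera()
        self.battery = Battery()
        self.gimbal = Gimbal()
        self.flight_controller = FlightController()
        self._connected = False

    def connect(self):
        self._connected = True        # placeholder for the SDK's connection handshake
        return self._connected

    def disconnect(self):
        self._connected = False

drone = Drone()
if drone.connect():
    # After connecting, the component classes become available to the application.
    print(type(drone.flight_controller).__name__)
    drone.disconnect()
```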
In accordance with various embodiments, an application can use a battery class 1203 for controlling the power source of a movable object. Also, the application can use the battery class 1203 for planning and testing the schedule for various flight tasks. As the battery is one of the most constrained resources of a movable object, the application should carefully consider the battery status, not only for the safety of the movable object but also to make sure that the movable object can finish its designated tasks. For example, the battery class 1203 can be configured such that if the battery level is low, the movable object can terminate its tasks and return home immediately. For example, if the movable object is determined to have a battery level that is below a threshold level, the battery class may cause the movable object to enter a power savings mode. In power savings mode, the battery class may shut off, or reduce, power available to various components that are not integral to safely returning the movable object to its home. For example, cameras that are not used for navigation and other accessories may lose power, to increase the amount of power available to the flight controller, motors, navigation system, and any other systems needed to return the movable object home, make a safe landing, etc.
Using the SDK, the application can obtain the current status and information of the battery by invoking a function of the Drone Battery Class to request the information. In some embodiments, the SDK can include a function for controlling the frequency of such feedback.
In accordance with various embodiments, an application can use a camera class 1202 for defining various operations on the camera in a movable object, such as an unmanned aircraft. For example, in the SDK, the Camera Class includes functions for receiving media data stored on an SD card, getting and setting photo parameters, taking photos, and recording videos.
An application can use the camera class 1202 for modifying the settings of photos and recordings. For example, the SDK may include a function that enables the developer to adjust the size of photos taken. Also, an application can use a media class for maintaining the photos and recordings.
In accordance with various embodiments, an application can use a gimbal class 1204 for controlling the view of the movable object. For example, the Gimbal Class can be used for configuring an actual view, e.g., setting a first person view of the movable object. Also, the Gimbal Class can be used for automatically stabilizing the gimbal so that it remains focused in one direction. Also, the application can use the Gimbal Class to change the angle of view for detecting different objects.
In accordance with various embodiments, an application can use a flight controller class 1205 for providing various flight control information and status about the movable object. As discussed, the flight controller class can include functions for receiving and/or requesting access data to be used to control the movement of the movable object across various regions in a movable object environment.
Using the Flight Controller Class, an application can monitor the flight status, e.g. using instant messages. For example, the callback function in the Flight Controller Class can send back the instant message every one thousand milliseconds (1000 ms) .
Furthermore, the Flight Controller Class allows a user of the application to investigate the instant message received from the movable object. For example, the pilots can analyze the data for each flight in order to further improve their flying skills.
In accordance with various embodiments, an application can use a ground station class 1207 to perform a series of operations for controlling the movable object.
For example, the SDK may require applications to have an SDK-LEVEL-2 key for using the Ground Station Class. The Ground Station Class can provide one-key-fly, one-key-go-home, manual control of the drone by the app (i.e., joystick mode) , setting up a cruise and/or waypoints, and various other task scheduling functionalities.
In accordance with various embodiments, an application can use a communication component for establishing the network connection between the application and the movable object.
FIG. 13 shows a flowchart of a method 1300 of generating a real-time visualization of mapping data on a mobile device in a movable object environment, in accordance with various embodiments. At operation/step 1302, the method can include obtaining mapping data from a scanning sensor. In some embodiments, the scanning sensor includes a light detection and ranging (LiDAR) sensor. In some embodiments, the mapping data includes point cloud data  collected by the LiDAR sensor including a plurality of points and corresponding color data. In some embodiments, the movable object is an unmanned aerial vehicle (UAV) .
At operation/step 1304, the method can include storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory. In some embodiments, the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the plurality of points. The plurality of points can be georeferenced using an inertial navigation system. In some embodiments, the real-time visualization is a colorized point cloud based on the color data and the point cloud data. For example, the color data can be obtained from at least one visible light camera from the one or more cameras. In some embodiments, the color data represents intensity variation or height variation.
At operation/step 1306, the method can include generating a real-time visualization of the mapping data stored in the dynamic buffers. In some embodiments, the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system. When the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers. In some embodiments, the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
At operation/step 1308, the method can include rendering the real-time visualization of the mapping data on a display coupled to the mobile device. In some embodiments, rendering the real-time visualization further includes rendering a synchronized side-by-side view of image data obtained from at least one of the one or more cameras and the real-time visualization of the mapping data.
In some embodiments, the method can further include storing points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size, and rendering the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
In some embodiments, the method can further include receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data, converting the gesture-based input into one or more movement commands for at least one of the payload, the movable object, or an adapter apparatus coupling the payload to the movable object, and sending the one or more movement commands to the payload, the movable object, or the adapter apparatus.
In some embodiments, the method can further include rendering an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated  mapping data received during execution of the one or more movement commands by the payload or the movable object.
In some embodiments, the method can further include receiving an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
In some embodiments, the method can further include receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data, and rendering a manipulated view of the mapping data, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
In some embodiments, the method can further include storing a copy of the mapping data to disk as a file on the mobile device. The copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
Many features can be performed in, using, or with the assistance of hardware, software, firmware, or combinations thereof. Consequently, features may be implemented using a processing system (e.g., including one or more processors) . Exemplary processors can include, without limitation, one or more general purpose microprocessors (for example, single or multi-core processors) , application-specific integrated circuits, application-specific instruction-set processors, graphics processing units, physics processing units, digital signal processing units, coprocessors, network processing units, audio processing units, encryption processing units, and the like.
Features can be implemented in, using, or with the assistance of a computer program product which is a storage medium (media) or computer readable medium (media) having instructions stored thereon/in which can be used to program a processing system to perform any of the features presented herein. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs) , or any type of media or device suitable for storing instructions and/or data.
Stored on any one of the machine readable medium (media) , features can be incorporated in software and/or firmware for controlling the hardware of a processing system, and for enabling a processing system to interact with other mechanism utilizing the results. Such software or firmware may include, but is not limited to, application code, device drivers, operating systems and execution environments/containers.
Features of the invention may also be implemented in hardware using, for example, hardware components such as application specific integrated circuits (ASICs) and field-programmable gate array (FPGA) devices. Implementation of the hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art.
Additionally, the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
While various embodiments have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention.
The present invention has been described above with the aid of functional building blocks illustrating the performance of specified functions and relationships thereof. The boundaries of these functional building blocks have often been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Any such alternate boundaries are thus within the scope and spirit of the invention.
The foregoing description has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. The breadth and scope should not be limited by any of the above-described exemplary embodiments. Many modifications and variations will be apparent to the practitioner skilled in the art. The modifications and variations include any relevant combination of the disclosed features. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalence.
At least some embodiments discussed above can be described in view of the following clauses:
1. A system comprising:
a movable object;
a payload coupled to the movable object, the payload comprising a scanning sensor, one or more cameras, and an inertial navigation system (INS) ; and
a mobile device in communication with the payload, the mobile device including at least one processor and a memory, the memory including instructions which, when executed by the at least one processor, cause the mobile device to:
obtain mapping data from the scanning sensor;
store the mapping data in one or more dynamic buffers in the memory of the mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
generate a real-time visualization of the mapping data stored in the dynamic buffers; and
render the real-time visualization of the mapping data on a display coupled to the mobile device.
2. The system of clause 1, wherein the scanning sensor includes a light detection and ranging (LiDAR) sensor.
3. The system of clause 2, wherein the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
4. The system of clause 3, wherein the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the point cloud data of the plurality of points.
5. The system of clause 4, wherein the point cloud data of the plurality of points are georeferenced using the INS.
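By way of illustration only, and not as a description of the claimed implementation, the following Kotlin sketch shows one possible organization of the paired point and color buffers referred to in clauses 3-5: each block is allocated independently, so a collection of such blocks need not occupy contiguous memory. The class name, field names, and per-point layout are assumptions introduced here.

```kotlin
// Hypothetical sketch: one independently allocated block of paired point
// and color data; a set of such blocks forms a non-contiguous buffer.
class PointBlock(private val capacity: Int) {
    val points = FloatArray(capacity * 3)  // x, y, z per georeferenced point
    val colors = ByteArray(capacity * 3)   // r, g, b paired by index with points
    var count = 0
        private set

    fun isFull(): Boolean = count >= capacity

    fun add(x: Float, y: Float, z: Float, r: Byte, g: Byte, b: Byte): Boolean {
        if (isFull()) return false
        val i = count * 3
        points[i] = x; points[i + 1] = y; points[i + 2] = z
        colors[i] = r; colors[i + 1] = g; colors[i + 2] = b
        count++
        return true
    }
}
```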
6. The system of clause 3, wherein the real-time visualization is a colorized point cloud based on the color data and the point cloud data.
7. The system of clause 6, wherein the color data is obtained from at least one visible light camera of the one or more cameras.
8. The system of clause 6, wherein the color data represents intensity variation or height variation.
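As a minimal sketch of the intensity- or height-based coloring described in clause 8, assuming a simple blue-to-red ramp and a caller-supplied value range, neither of which is specified in the disclosure:

```kotlin
import kotlin.math.abs

// Hypothetical sketch: derive an RGB triple from a scalar such as point
// intensity or height when camera color is not used.
fun rampColor(value: Float, minValue: Float, maxValue: Float): IntArray {
    if (maxValue <= minValue) return intArrayOf(0, 0, 255)        // degenerate range
    val t = ((value - minValue) / (maxValue - minValue)).coerceIn(0f, 1f)
    val r = (255f * t).toInt()                        // high values trend red
    val g = (255f * (1f - abs(2f * t - 1f))).toInt()  // mid values trend green
    val b = (255f * (1f - t)).toInt()                 // low values trend blue
    return intArrayOf(r, g, b)
}
```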
9. The system of clause 4, wherein the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system.
10. The system of clause 9, wherein when the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers.
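A minimal sketch of the overwrite policy of clause 10, assuming a flat float array and uniform random replacement; the block structure and dynamic indexing of clause 9 are deliberately elided here, and all names are hypothetical.

```kotlin
import kotlin.random.Random

// Hypothetical sketch: write sequentially while capacity remains; once full,
// each new point replaces a uniformly sampled old point, so the retained
// points approximate a random sample of everything received so far.
class SampledPointStore(private val capacity: Int) {
    private val xyz = FloatArray(capacity * 3)
    private var count = 0

    fun insert(x: Float, y: Float, z: Float) {
        val slot = if (count < capacity) count++ else Random.nextInt(capacity)
        val i = slot * 3
        xyz[i] = x; xyz[i + 1] = y; xyz[i + 2] = z
    }

    fun size(): Int = count
}
```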
11. The system of clause 9, wherein the instructions, when executed, further cause the mobile device to:
store point cloud data of points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size; and
render the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
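A minimal sketch of the overview buffer of clause 11, assuming a fixed capacity and a caller-supplied intensity threshold (both values are assumptions, not taken from the disclosure):

```kotlin
// Hypothetical sketch: a small fixed-size buffer of high-intensity points
// that can be drawn before the larger dynamic buffers are traversed.
class OverviewBuffer(private val maxPoints: Int, private val intensityThreshold: Float) {
    private val xyz = FloatArray(maxPoints * 3)
    private var count = 0

    /** Keeps the point only if it is bright enough and capacity remains. */
    fun offer(x: Float, y: Float, z: Float, intensity: Float): Boolean {
        if (intensity <= intensityThreshold || count >= maxPoints) return false
        val i = count * 3
        xyz[i] = x; xyz[i + 1] = y; xyz[i + 2] = z
        count++
        return true
    }
}
```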
12. The system of clause 1, wherein to render the real-time visualization of the mapping data on a display coupled to the mobile device, the instructions, when executed, further cause the mobile device to:
render a synchronized side-by-side view of image data obtained from at least one of the one or more cameras and the real-time visualization of the mapping data.
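One way the side-by-side view of clause 12 could keep its two panes synchronized is by matching timestamps; the following sketch assumes a hypothetical CameraFrame type and millisecond timestamps, neither of which appears in the disclosure.

```kotlin
import kotlin.math.abs

// Hypothetical sketch: pick the camera frame whose timestamp is closest to
// the latest point-cloud batch so both panes show the same moment.
data class CameraFrame(val timestampMs: Long)

fun frameForBatch(frames: List<CameraFrame>, batchTimestampMs: Long): CameraFrame? =
    frames.minByOrNull { abs(it.timestampMs - batchTimestampMs) }
```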
13. The system of clause 1, wherein the instructions, when executed, further cause the mobile device to:
receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data;
convert the gesture-based input into one or more movement commands for at least one of the payload, the movable object, or an adapter apparatus coupling the payload to the movable object; and
send the one or more movement commands to the payload, the movable object, or the adapter apparatus.
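A minimal sketch of the gesture-to-movement conversion of clause 13, assuming a one-finger pan mapped to yaw and pitch rates for the adapter apparatus; the command type and scale factor are hypothetical.

```kotlin
// Hypothetical sketch: convert touchscreen pan deltas into a yaw/pitch rate
// command that could be forwarded to the adapter apparatus.
data class GimbalRateCommand(val yawDegPerSec: Float, val pitchDegPerSec: Float)

fun panToGimbalRate(
    dxPixels: Float,
    dyPixels: Float,
    degPerSecPerPixel: Float = 0.05f
): GimbalRateCommand = GimbalRateCommand(
    yawDegPerSec = dxPixels * degPerSecPerPixel,    // horizontal pan -> yaw
    pitchDegPerSec = -dyPixels * degPerSecPerPixel  // vertical pan -> pitch
)
```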
14. The system of clause 13, wherein the instructions, when executed, further cause the mobile device to:
render an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
15. The system of clause 1, wherein the instructions, when executed, further cause the mobile device to:
receive an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
16. The system of clause 1, wherein the instructions, when executed, further cause the mobile device to:
receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data; and
render a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
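A minimal sketch of the on-screen view manipulation of clause 16, assuming an orbit-style camera whose state is updated by pinch (zoom), rotation, and tilt gestures; the state fields and limits are assumptions.

```kotlin
// Hypothetical sketch: fold pinch, rotate, and tilt gestures into a simple
// orbit-style view state used when drawing the point cloud.
data class ViewState(val distance: Float, val yawDeg: Float, val tiltDeg: Float)

fun applyGesture(
    view: ViewState,
    pinchScale: Float,
    rotateDeltaDeg: Float,
    tiltDeltaDeg: Float
): ViewState = ViewState(
    distance = (view.distance / pinchScale.coerceIn(0.1f, 10f)).coerceIn(1f, 500f), // zoom
    yawDeg = (view.yawDeg + rotateDeltaDeg) % 360f,                                 // rotate
    tiltDeg = (view.tiltDeg + tiltDeltaDeg).coerceIn(0f, 89f)                       // tilt
)
```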
17. The system of clause 1, wherein the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
18. The system of clause 1, wherein the instructions, when executed, further cause the mobile device to:
store a copy of the mapping data to disk as a file on the mobile device.
19. The system of clause 18, wherein the copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
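A minimal sketch of the on-disk copy described in clauses 18-19, assuming each batch of points is appended to a flat binary file of 32-bit floats (the actual file format is not specified in the disclosure):

```kotlin
import java.io.DataOutputStream
import java.io.File
import java.io.FileOutputStream

// Hypothetical sketch: append each incoming point batch to a file so the
// in-memory buffers can be rebuilt after a loss or replayed later.
fun appendBatchToFile(file: File, xyz: FloatArray) {
    DataOutputStream(FileOutputStream(file, true).buffered()).use { out ->
        for (v in xyz) out.writeFloat(v)
    }
}
```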
20. The system of clause 1, wherein the movable object is an unmanned aerial vehicle (UAV).
21. A method comprising:
obtaining mapping data from a scanning sensor;
storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
generating a real-time visualization of the mapping data stored in the dynamic buffers; and
rendering the real-time visualization of the mapping data on a display coupled to the mobile device.
22. The method of clause 21, wherein the scanning sensor includes a light detection and ranging (LiDAR) sensor.
23. The method of clause 22, wherein the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
24. The method of clause 23, wherein the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the point cloud data of the plurality of points.
25. The method of clause 24, wherein the point cloud data of the plurality of points are georeferenced using an inertial navigation system.
26. The method of clause 23, wherein the real-time visualization is a colorized point cloud based on the color data and the point cloud data.
27. The method of clause 26, wherein the color data is obtained from at least one visible light camera.
28. The method of clause 26, wherein the color data represents intensity variation or height variation.
29. The method of clause 24, wherein the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system.
30. The method of clause 29, wherein when the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers.
31. The method of clause 29, further comprising:
storing point cloud data of points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size; and
rendering the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
32. The method of clause 21, wherein rendering the real-time visualization of the mapping data on a display coupled to the mobile device further comprises:
rendering a synchronized side-by-side view of image data obtained from at least one camera and the real-time visualization of the mapping data.
33. The method of clause 21, further comprising:
receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data;
converting the gesture-based input into one or more movement commands for at least one of a payload, a movable object, or an adapter apparatus coupling the payload to the movable object; and
sending the one or more movement commands to the payload, the movable object, or the adapter apparatus.
34. The method of clause 33, further comprising:
rendering an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
35. The method of clause 21, further comprising:
receiving an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
36. The method of clause 21, further comprising:
receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data; and
rendering a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
37. The method of clause 21, wherein the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
38. The method of clause 21, further comprising:
storing a copy of the mapping data to disk as a file on the mobile device.
39. The method of clause 38, wherein the copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
40. A non-transitory computer readable storage medium including instructions stored thereon which, when executed by at least one processor, cause the at least one processor to:
obtain mapping data from a scanning sensor;
store the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
generate a real-time visualization of the mapping data stored in the dynamic buffers; and
render the real-time visualization of the mapping data on a display coupled to the mobile device.
42. The non-transitory computer readable storage medium of clause 40, wherein the scanning sensor includes a light detection and ranging (LiDAR) sensor.
43. The non-transitory computer readable storage medium of clause 42, wherein the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
44. The non-transitory computer readable storage medium of clause 43, wherein the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the point cloud data of the plurality of points.
45. The non-transitory computer readable storage medium of clause 44, wherein the point cloud data of the plurality of points are georeferenced using an inertial navigation system.
46. The non-transitory computer readable storage medium of clause 43, wherein the real-time visualization is a colorized point cloud based on the color data and the point cloud data.
47. The non-transitory computer readable storage medium of clause 46, wherein the color data is obtained from at least one visible light camera.
48. The non-transitory computer readable storage medium of clause 46, wherein the color data represents intensity variation or height variation.
49. The non-transitory computer readable storage medium of clause 44, wherein the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system.
50. The non-transitory computer readable storage medium of clause 49, wherein when the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers.
51. The non-transitory computer readable storage medium of clause 49, wherein the instructions, when executed, further cause the at least one processor to:
store point cloud data of points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size; and
render the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
52. The non-transitory computer readable storage medium of clause 40, wherein to render the real-time visualization of the mapping data on a display coupled to the mobile device, the instructions, when executed, further cause the at least one processor to:
render a synchronized side-by-side view of image data obtained from at least one camera and the real-time visualization of the mapping data.
53. The non-transitory computer readable storage medium of clause 40, wherein the instructions, when executed, further cause the at least one processor to:
receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data;
convert the gesture-based input into one or more movement commands for at least one of a payload, a movable object, or an adapter apparatus coupling the payload to the movable object; and
send the one or more movement commands to the payload, the movable object, or the adapter apparatus.
54. The non-transitory computer readable storage medium of clause 53, wherein the instructions, when executed, further cause the at least one processor to:
render an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
55. The non-transitory computer readable storage medium of clause 40, wherein the instructions, when executed, further cause the at least one processor to:
receive an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
56. The non-transitory computer readable storage medium of clause 40, wherein the instructions, when executed, further cause the at least one processor to:
receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data; and
render a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
57. The non-transitory computer readable storage medium of clause 40, wherein the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
58. The non-transitory computer readable storage medium of clause 40, wherein the instructions, when executed, further cause the at least one processor to:
store a copy of the mapping data to disk as a file on the mobile device.
59. The non-transitory computer readable storage medium of clause 58, wherein the copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
60. A mobile device, comprising:
a touchscreen display;
at least one processor; and
a memory, the memory including instructions which, when executed by the at least one processor, cause the mobile device to:
obtain mapping data from a scanning sensor;
store the mapping data in one or more dynamic buffers in the memory, the dynamic buffers comprising non-contiguous portions of the memory;
generate a real-time visualization of the mapping data stored in the dynamic buffers; and
render the real-time visualization of the mapping data on a display coupled to the mobile device.
In the various embodiments described above, unless specifically noted otherwise, disjunctive language such as the phrase “at least one of A, B, or C,” is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C). As such, disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, or at least one of C to each be present.

Claims (59)

  1. A system, comprising:
    a movable object;
    a payload coupled to the movable object, the payload comprising a scanning sensor, one or more cameras, and an inertial navigation system (INS); and
    a mobile device in communication with the payload, the mobile device including at least one processor and a memory, the memory including instructions which, when executed by the at least one processor, cause the mobile device to:
    obtain mapping data from the scanning sensor;
    store the mapping data in one or more dynamic buffers in the memory of the mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
    generate a real-time visualization of the mapping data stored in the dynamic buffers; and
    render the real-time visualization of the mapping data on a display coupled to the mobile device.
  2. The system of claim 1, wherein the scanning sensor includes a light detection and ranging (LiDAR) sensor.
  3. The system of claim 2, wherein the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
  4. The system of claim 3, wherein the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the point cloud data of the plurality of points.
  5. The system of claim 4, wherein the point cloud data of the plurality of points are georeferenced using the INS.
  6. The system of claim 3, wherein the real-time visualization is a colorized point cloud based on the color data and the point cloud data.
  7. The system of claim 6, wherein the color data is obtained from at least one visible light camera of the one or more cameras.
  8. The system of claim 6, wherein the color data represents intensity variation or height variation.
  9. The system of claim 4, wherein the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system.
  10. The system of claim 9, wherein when the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers.
  11. The system of claim 9, wherein the instructions, when executed, further cause the mobile device to:
    store point cloud data of points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size; and
    render the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
  12. The system of claim 1, wherein to render the real-time visualization of the mapping data on a display coupled to the mobile device, the instructions, when executed, further cause the mobile device to:
    render a synchronized side-by-side view of image data obtained from at least one of the one or more cameras and the real-time visualization of the mapping data.
  13. The system of claim 1, wherein the instructions, when executed, further cause the mobile device to:
    receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data;
    convert the gesture-based input into one or more movement commands for at least one of the payload, the movable object, or an adapter apparatus coupling the payload to the movable object; and
    send the one or more movement commands to the payload, the movable object, or the adapter apparatus.
  14. The system of claim 13, wherein the instructions, when executed, further cause the mobile device to:
    render an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
  15. The system of claim 1, wherein the instructions, when executed, further cause the mobile device to:
    receive an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
  16. The system of claim 1, wherein the instructions, when executed, further cause the mobile device to:
    receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data; and
    render a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
  17. The system of claim 1, wherein the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
  18. The system of claim 1, wherein the instructions, when executed, further cause the mobile device to:
    store a copy of the mapping data to disk as a file on the mobile device.
  19. The system of claim 18, wherein the copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
  20. The system of claim 1, wherein the movable object is an unmanned aerial vehicle (UAV).
  21. A method, comprising:
    obtaining mapping data from a scanning sensor;
    storing the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
    generating a real-time visualization of the mapping data stored in the dynamic buffers; and
    rendering the real-time visualization of the mapping data on a display coupled to the mobile device.
  22. The method of claim 21, wherein the scanning sensor includes a light detection and ranging (LiDAR) sensor.
  23. The method of claim 22, wherein the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
  24. The method of claim 23, wherein the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the point cloud data of the plurality of points.
  25. The method of claim 24, wherein the point cloud data of the plurality of points are georeferenced using an inertial navigation system.
  26. The method of claim 23, wherein the real-time visualization is a colorized point cloud based on the color data and the point cloud data.
  27. The method of claim 26, wherein the color data is obtained from at least one visible light camera.
  28. The method of claim 26, wherein the color data represents intensity variation or height variation.
  29. The method of claim 24, wherein the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system.
  30. The method of claim 29, wherein when the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers.
  31. The method of claim 29, further comprising:
    storing point cloud data of points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size; and
    rendering the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
  32. The method of claim 21, wherein rendering the real-time visualization of the mapping data on a display coupled to the mobile device further comprises:
    rendering a synchronized side-by-side view of image data obtained from at least one camera and the real-time visualization of the mapping data.
  33. The method of claim 21, further comprising:
    receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data;
    converting the gesture-based input into one or more movement commands for at least one of a payload, a movable object, or an adapter apparatus coupling the payload to the movable object; and
    sending the one or more movement commands to the payload, the movable object, or the adapter apparatus.
  34. The method of claim 33, further comprising:
    rendering an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
  35. The method of claim 21, further comprising:
    receiving an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
  36. The method of claim 21, further comprising:
    receiving a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data; and
    rendering a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
  37. The method of claim 21, wherein the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
  38. The method of claim 21, further comprising:
    storing a copy of the mapping data to disk as a file on the mobile device.
  39. The method of claim 38, wherein the copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
  40. A non-transitory computer readable storage medium including instructions stored thereon which, when executed by at least one processor, cause the at least one processor to:
    obtain mapping data from a scanning sensor;
    store the mapping data in one or more dynamic buffers in a memory of a mobile device, the dynamic buffers comprising non-contiguous portions of the memory;
    generate a real-time visualization of the mapping data stored in the dynamic buffers; and
    render the real-time visualization of the mapping data on a display coupled to the mobile device.
  41. The non-transitory computer readable storage medium of claim 40, wherein the scanning sensor includes a light detection and ranging (LiDAR) sensor.
  42. The non-transitory computer readable storage medium of claim 41, wherein the mapping data includes point cloud data of a plurality of points collected by the LiDAR sensor and corresponding color data.
  43. The non-transitory computer readable storage medium of claim 42, wherein the dynamic buffers include color buffers to store the corresponding color data and point buffers to store the point cloud data of the plurality of points.
  44. The non-transitory computer readable storage medium of claim 43, wherein the point cloud data of the plurality of points are georeferenced using an inertial navigation system.
  45. The non-transitory computer readable storage medium of claim 42, wherein the real-time visualization is a colorized point cloud based on the color data and the point cloud data.
  46. The non-transitory computer readable storage medium of claim 45, wherein the color data is obtained from at least one visible light camera.
  47. The non-transitory computer readable storage medium of claim 45, wherein the color data represents intensity variation or height variation.
  48. The non-transitory computer readable storage medium of claim 43, wherein the dynamic buffers are circular buffers comprising a plurality of buffer blocks, wherein the mapping data is written sequentially to empty buffer blocks indexed by a dynamic indexing system.
  49. The non-transitory computer readable storage medium of claim 48, wherein when the dynamic buffers are filled, new point data overwrites a random sampling of old point data in the dynamic buffers.
  50. The non-transitory computer readable storage medium of claim 48, wherein the instructions, when executed, further cause the at least one processor to:
    store point cloud data of points from the mapping data associated with intensity values that exceed a threshold in an overview buffer, wherein the overview buffer is of fixed size; and
    render the points stored in the overview buffer before rendering the mapping data stored in the dynamic buffers.
  51. The non-transitory computer readable storage medium of claim 40, wherein to render the real-time visualization of the mapping data on a display coupled to the mobile device, the instructions, when executed, further cause the at least one processor to:
    render a synchronized side-by-side view of image data obtained from at least one camera and the real-time visualization of the mapping data.
  52. The non-transitory computer readable storage medium of claim 40, wherein the instructions, when executed, further cause the at least one processor to:
    receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data;
    convert the gesture-based input into one or more movement commands for at least one of a payload, a movable object, or an adapter apparatus coupling the payload to the movable object; and
    send the one or more movement commands to the payload, the movable object, or the adapter apparatus.
  53. The non-transitory computer readable storage medium of claim 52, wherein the instructions, when executed, further cause the at least one processor to:
    render an updated real-time visualization of updated mapping data on the display coupled to the mobile device, the updated mapping data received during execution of the one or more movement commands by the payload or the movable object.
  54. The non-transitory computer readable storage medium of claim 40, wherein the instructions, when executed, further cause the at least one processor to:
    receive an input via the display coupled to the mobile device, the input to switch view modes between a camera and point cloud side-by-side view, a point cloud view, or a camera view.
  55. The non-transitory computer readable storage medium of claim 40, wherein the instructions, when executed, further cause the at least one processor to:
    receive a gesture-based input via the display coupled to the mobile device, the gesture-based input to manipulate a current view of the mapping data; and
    render a manipulated view of the mapping data based on the gesture-based input, the manipulated view including one of a translation, rotation, tilt, or zoom relative to the current view of the mapping data.
  56. The non-transitory computer readable storage medium of claim 40, wherein the real-time visualization is generated within 100 milliseconds of storage of the mapping data in the one or more dynamic buffers.
  57. The non-transitory computer readable storage medium of claim 40, wherein the instructions, when executed, further cause the at least one processor to:
    store a copy of the mapping data to disk as a file on the mobile device.
  58. The non-transitory computer readable storage medium of claim 57, wherein the copy of the mapping data is used to recover lost mapping data in memory or to generate a playback visualization of the mapping data.
  59. A mobile device, comprising:
    a touchscreen display;
    at least one processor; and
    a memory, the memory including instructions which, when executed by the at least one processor, cause the mobile device to:
    obtain mapping data from a scanning sensor;
    store the mapping data in one or more dynamic buffers in the memory, the dynamic buffers comprising non-contiguous portions of the memory;
    generate a real-time visualization of the mapping data stored in the dynamic buffers; and
    render the real-time visualization of the mapping data on a display coupled to the mobile device.
PCT/CN2021/077854 2020-10-12 2021-02-25 Large scope point cloud data generation and optimization WO2022077829A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202017068646A 2020-10-12 2020-10-12
US17/068,646 2020-10-12

Publications (1)

Publication Number Publication Date
WO2022077829A1 true WO2022077829A1 (en) 2022-04-21

Family

ID=81207648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/077854 WO2022077829A1 (en) 2020-10-12 2021-02-25 Large scope point cloud data generation and optimization

Country Status (1)

Country Link
WO (1) WO2022077829A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110192122A (en) * 2017-01-24 2019-08-30 深圳市大疆创新科技有限公司 Radar-directed system and method on unmanned moveable platform
CN209280927U (en) * 2018-09-30 2019-08-20 广州地理研究所 A kind of laser radar UAV system
US20200132822A1 (en) * 2018-10-29 2020-04-30 Dji Technology, Inc. User interface for displaying point clouds generated by a lidar device on a uav
CN109885083A (en) * 2019-03-06 2019-06-14 国网陕西省电力公司检修公司 Transmission line of electricity fining inspection flying platform and method for inspecting based on laser radar

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635889A (en) * 2024-01-26 2024-03-01 南京柠瑛智能科技有限公司 Real-time rendering method, system and device for laser point cloud data
CN117635889B (en) * 2024-01-26 2024-04-23 南京柠瑛智能科技有限公司 Real-time rendering method, system and device for laser point cloud data

Similar Documents

Publication Publication Date Title
US11698449B2 (en) User interface for displaying point clouds generated by a LiDAR device on a UAV
US11721225B2 (en) Techniques for sharing mapping data between an unmanned aerial vehicle and a ground vehicle
US10895968B2 (en) Graphical user interface customization in a movable object environment
US11927953B2 (en) Customizable waypoint missions
US11709073B2 (en) Techniques for collaborative map construction between an unmanned aerial vehicle and a ground vehicle
WO2020088414A1 (en) A movable object performing real-time mapping using a payload assembly
US20210404840A1 (en) Techniques for mapping using a compact payload in a movable object environment
WO2020103023A1 (en) Surveying and mapping system, surveying and mapping method, apparatus, device and medium
US20230177707A1 (en) Post-processing of mapping data for improved accuracy and noise-reduction
WO2022077829A1 (en) Large scope point cloud data generation and optimization
US20220113423A1 (en) Representation data generation of three-dimensional mapping data
US20220113421A1 (en) Online point cloud processing of lidar and camera data
KR20210106422A (en) Job control system, job control method, device and instrument
US20200106958A1 (en) Method and system for operating a movable platform using ray-casting mapping
WO2022113482A1 (en) Information processing device, method, and program
US20240013460A1 (en) Information processing apparatus, information processing method, program, and information processing system
WO2023032292A1 (en) Information processing method, information processing program, and information processing device
JP2023083072A (en) Method, system and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21878886

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21878886

Country of ref document: EP

Kind code of ref document: A1