CN113748314A - Interactive three-dimensional point cloud matching - Google Patents

Interactive three-dimensional point cloud matching

Info

Publication number
CN113748314A
Authority
CN
China
Prior art keywords
point cloud
cloud data
user
user interface
data
Prior art date
Legal status
Granted
Application number
CN201880100676.4A
Other languages
Chinese (zh)
Other versions
CN113748314B (en)
Inventor
张妍
侯庭波
Current Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd
Publication of CN113748314A
Application granted
Publication of CN113748314B
Legal status: Active


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3833 - Creation or updating of map data characterised by the source of data
    • G01C21/3841 - Data obtained from two or more sources, e.g. probe vehicles
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/10 - Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20 - Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/28 - Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/85 - Arrangements for transferring vehicle- or driver-related data
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00 - Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/16 - Type of output information
    • B60K2360/166 - Navigation
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00 - Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/592 - Data transfer involving external databases
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808 - Evaluating distance, position or velocity data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/24 - Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2012 - Colour editing, changing, or manipulating; Use of colour codes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2016 - Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Mechanical Engineering (AREA)
  • Software Systems (AREA)
  • Transportation (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Navigation (AREA)

Abstract

Systems and methods are disclosed for generating an interactive user interface that enables a user to move, rotate, or otherwise edit three-dimensional point cloud data in a virtual three-dimensional space in order to align or match point clouds captured by light detection and ranging scans prior to generating a high-resolution map. The system may obtain point cloud data for two or more point clouds, render the point clouds for display in a user interface, and then receive a user selection of one of the point clouds and a command from the user to move and/or rotate the selected point cloud. In response to the user commands, the system may adjust the display position of the selected point cloud relative to other simultaneously displayed point clouds in real time, and may store the adjusted point cloud position data for use in generating a new high-resolution map.

Description

Interactive three-dimensional point cloud matching
Incorporation by reference of priority applications
Any and all applications, if any, for which a foreign or domestic priority claim is identified in the application data sheet of the present application are hereby incorporated by reference in their entirety in accordance with 37 CFR 1.57.
Copyright notice
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the U.S. Patent and Trademark Office files and/or records, but otherwise reserves all copyright rights whatsoever.
Background
Vehicles, such as vehicles used for ridesharing, vehicles providing driver assistance functionality, and/or automated or autonomously driven vehicles (AVs), may use an onboard data processing system to acquire and process sensor data to perform a wide variety of functions. For example, functions may include determining and/or displaying navigation routes, identifying road signs, detecting objects and/or road obstacles, controlling vehicle operation, and so forth. Providing accurate and precise high-resolution maps for autonomous vehicles is one of the most basic and important prerequisites for achieving fully autonomous driving. For safety reasons, the maps accessed by autonomous vehicles must contain far more detailed information, and possess true-ground-absolute accuracy, than typical existing map resources that were not designed for autonomous driving purposes.
Drawings
FIG. 1A illustrates a block diagram of a networked vehicle environment in which one or more vehicles and/or one or more user devices interact with a server, according to one embodiment.
FIG. 1B illustrates a block diagram showing the vehicle of FIG. 1A communicating with one or more other vehicles and/or servers of FIG. 1A, according to one embodiment.
FIG. 2 illustrates a block diagram showing the server of FIGS. 1A and 1B in communication with a map editor device, according to one embodiment.
FIG. 3 is an illustrative user interface presenting a zoomed-out view that includes a three-dimensional point cloud rendering and a two-dimensional map projection with graphical indicators representing different light detection and ranging scan areas.
FIG. 4 is an illustrative user interface including a zoomed-in view of a three-dimensional point cloud rendering and a two-dimensional map projection, with superimposed graphical indicators of nodes and connections within a pose graph associated with the point cloud data.
FIG. 5 is an illustrative user interface including a three-dimensional point cloud rendering and a two-dimensional map projection, where two user-selected nodes have been removed.
FIG. 6 is an illustrative user interface presenting a zoomed-out view that includes a three-dimensional point cloud rendering and a two-dimensional map projection, where the displayed pose graph data has been altered based on user interaction with the user interface.
FIG. 7 is a flow chart of an illustrative method for providing user interface functionality that enables a user to view and edit point cloud and pose graph data for generating a high resolution map.
FIG. 8 is an illustrative user interface including a zoomed-in view of a three-dimensional point cloud rendering and a two-dimensional map projection, with a display of distance measurements between user-selected points.
FIG. 9 is an illustrative user interface that includes a three-dimensional point cloud rendering of two point clouds and enables a user to visually realign or match points in the respective point clouds.
FIG. 10 is a flow diagram of an illustrative method for enabling a user to visually edit the positioning of one or more point clouds for generating a high resolution map.
Detailed Description
Constructing a large high definition map (HD map), such as a map of an entire city, is a relatively new technical field. One of the challenges is that a large amount of captured data must be processed and surveyed (usually programmatically) through a multi-part mapping pipeline. In addition to the final output of dense three-dimensional point clouds and two-dimensional map images, intermediate results, such as light detection and ranging (LiDAR) scans and corresponding pose graphs, are also produced in a typical high-resolution map construction process. Existing methods of constructing high-resolution maps generally lack efficient tools that enable a user of a computing system to visually survey data in a particular region of captured light detection and ranging scan data, visualize intermediate and final results, and interactively modify the intermediate data through a graphical user interface to improve the quality of the final high-resolution map data. Aspects of the present disclosure include a variety of user interface tools and associated computer functionality that enable integrated visual surveying and editing of two-dimensional and three-dimensional visualizations of captured light detection and ranging data, pose graphs, and map data, in order to build more accurate high-resolution maps.
Detailed descriptions and examples of systems and methods according to one or more illustrative embodiments of the present disclosure may be found in the sections entitled "Improved high resolution map generation features and related interfaces" and "Example embodiments," as well as in FIGS. 2-10 herein. Still further, the components and functionality of the interactive user interface and associated high-resolution mapping features may be configured and/or incorporated into the networked vehicle environment 100 described herein with reference to FIGS. 1A and 1B.
The various embodiments described herein are intimately tied to, enabled by, and dependent upon vehicle and/or computer technology. For example, generating interactive graphical user interfaces that display, and implement associated computer functionality for manipulating, potentially millions of data points in a three-dimensional virtual space, as described herein with reference to various embodiments, cannot reasonably be performed by humans alone without the vehicle and/or computer technology upon which these interactive user interfaces are implemented.
Networked vehicle environment
FIG. 1A illustrates a block diagram of a networked vehicle environment 100 in which one or more vehicles 120 and/or one or more user devices 102 interact with a server 130 via a network 110, according to one embodiment. For example, the vehicle 120 may be equipped to provide ridesharing and/or other location-based services, to help the driver control vehicle operation (e.g., through various driver assistance features such as adaptive and/or conventional cruise control, adaptive headlamp control, anti-lock braking, automatic parking, night vision, blind spot monitoring, collision avoidance, crosswind stabilization, driver fatigue detection, driver monitoring systems, emergency driver assistance, intersection assistance, hill descent control, intelligent speed adaptation, lane centering, lane departure warning, forward, rear, and/or side parking sensors, pedestrian detection, rain sensors, surround-view systems, tire pressure monitors, traffic sign recognition, steering assistance, reverse driving warnings, traffic condition alerts, etc.), and/or to fully control vehicle operation. Thus, the vehicle 120 may be a conventional gasoline, natural gas, biofuel, electric, hydrogen, etc., vehicle configured to provide ridesharing and/or other location-based services, a vehicle providing driver assistance functionality (e.g., one or more of the driver assistance features described herein), or an automated or autonomously driven vehicle (AV). The vehicle 120 may be an automobile, truck, van, bus, motorcycle, scooter, bicycle, and/or any other motorized vehicle.
Server 130 may communicate with vehicle 120 to obtain vehicle data, such as route data, sensor data, perception data, vehicle 120 control data, vehicle 120 component failure and/or fault data, and so forth. Server 130 may process and store such vehicle data for use in other operations performed by server 130 and/or another computing system (not shown). Such operations may include running a diagnostic model for identifying vehicle 120 operational issues (e.g., causes of vehicle 120 navigation errors, abnormal sensor readings, unidentified objects, vehicle 120 component failures, etc.); running a model that simulates the performance of the vehicle 120 given a set of variables; identifying objects that the vehicle 120 cannot identify; generating control instructions that, when executed by the vehicle 120, cause the vehicle 120 to drive and/or maneuver in a certain manner along a specified path; and/or the like.
Server 130 may also transmit data to vehicle 120. For example, server 130 may transmit map data, firmware and/or software updates, vehicle 120 control instructions, identifications of objects that could not otherwise be recognized by vehicle 120, passenger pickup information, traffic data, and/or the like.
In addition to communicating with one or more vehicles 120, server 130 may also be capable of communicating with one or more user devices 102. In particular, server 130 may provide web services that enable users to request location-based services (e.g., a transportation service, such as a ridesharing service) through an application running on user device 102. For example, the user device 102 may correspond to a computing device, such as a smartphone, tablet, laptop, smart watch, or any other device that communicates with the server 130 over the network 110. In this embodiment, the user device 102 executes an application, such as a mobile application, that a user operating the user device 102 may use to interact with the server 130. For example, the user device 102 may communicate with the server 130 to provide location data and/or queries to the server 130, to receive map-related data and/or directions from the server 130, and/or the like.
Server 130 may process requests and/or other data received from user device 102 to identify a service provider (e.g., a driver of vehicle 120) to provide the requested service to the user. Further, server 130 may receive data, such as user trip pickup or destination data, user location query data, and the like, based on which server 130 identifies regions, addresses, and/or other locations associated with various users. The server 130 may then use the identified locations to provide directions to the service provider and/or the user toward the determined pickup location.
The application running on user device 102 may be created and/or made available by the same entity responsible for server 130. Alternatively, the application running on the user device 102 may be a third-party application that includes features (e.g., an application programming interface or software development kit) that enable communication with the server 130.
For simplicity and ease of explanation, a single server 130 is illustrated in FIG. 1A. It should be understood, however, that server 130 may be a single computing device, or may include multiple distinct computing devices logically or physically grouped together to collectively operate as a server system. The components of the server 130 may be implemented in dedicated hardware (e.g., a server computing device with one or more ASICs) such that no software is necessary, or as a combination of hardware and software. Further, the modules and components of server 130 may be combined on one server computing device or arranged, individually or in groups, across multiple server computing devices. In some embodiments, server 130 may include more or fewer components than shown in FIG. 1A.
The network 110 includes any wired network, wireless network, or combination thereof. For example, the network 110 may be a personal area network, a local area network, a wide area network, an over-the-air broadcast network (e.g., for radio or television), a cable network, a satellite network, a cellular telephone network, or a combination thereof. As a further example, the network 110 may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some embodiments, the network 110 may be a private or semi-private network, such as a corporate or university intranet. Network 110 may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The network 110 may use protocols and components for communicating via the Internet or any of the other network types described above. For example, the protocols used by the network 110 may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in further detail herein.
The server 130 may include a navigation unit 140, a vehicle data processing unit 145, and a data store 150. The navigation unit 140 may assist with location-based services. For example, the navigation unit 140 may assist a user (also referred to herein as a "driver") in transporting another user (also referred to herein as a "rider") and/or an object (e.g., food, packages, etc.) from a first location (also referred to herein as a "pickup location") to a second location (also referred to herein as a "destination location"). The navigation unit 140 may facilitate user and/or object transport by providing maps and/or navigation instructions to an application running on the rider's user device 102, to an application running on the driver's user device 102, and/or to a navigation system running on the vehicle 120.
As an example, the navigation unit 140 may include a matching service (not shown) that pairs a rider requesting a trip from a pickup location to a destination location with a driver who is able to complete the trip. The matching service may interact with an application running on the rider's user device 102 and/or an application running on the driver's user device 102 to establish the rider's itinerary and/or to process payment from the rider to the driver.
The navigation unit 140 may also communicate with the application running on the driver's user device 102 during the trip to obtain trip location information from the user device 102 (e.g., via a Global Positioning System (GPS) component coupled to and/or embedded in the user device 102) and provide navigation directions to the application that assist the driver in traveling from the current location to the destination location. The navigation unit 140 may also indicate a number of different geographic locations or points of interest to the driver, whether or not the driver is carrying a rider.
The vehicle data processing unit 145 may be configured to support driver assistance features of the vehicle 120 and/or to support autonomous driving. For example, the vehicle data processing unit 145 may generate and/or transmit map data to the vehicle 120, run diagnostic models for identifying operational issues with the vehicle 120, run models for simulating the performance of the vehicle 120 given a set of variables, use vehicle data provided by the vehicle 120 to identify objects and transmit identifications of those objects to the vehicle 120, generate and/or transmit vehicle 120 control instructions to the vehicle 120, and/or the like.
The data storage 150 may store various types of data used by the navigation unit 140, the vehicle data processing unit 145, the user device 102, and/or the vehicle 120. For example, the data store 150 may store user data 152, map data 154, search data 156, and log data 158.
The user data 152 may include information about some or all of the users registered with the location-based service, such as drivers and riders. The information may include, for example, usernames, passwords, names, addresses, billing information, data associated with previous trips requested or taken by the user, user rating information, user loyalty program information, and/or the like.
Map data 154 may include high-resolution maps generated from sensor data (e.g., light detection and ranging (LiDAR) sensors, radio detection and ranging (RADAR) sensors, infrared cameras, visible light cameras, stereo cameras, inertial measurement units (IMUs), etc.), satellite imagery, optical character recognition (OCR) performed on captured street images (e.g., to recognize street names, street sign text, point of interest names, etc.), and so forth; information for calculating routes; information for rendering two-dimensional and/or three-dimensional graphical maps; and/or the like. For example, the map data 154 may include elements such as the layout of streets and intersections; bridges (e.g., including information about the height and/or width of an overpass); exit ramps; buildings; parking structure entrances and exits (e.g., including information about the height and/or width of a vehicle entrance and/or exit); the locations of signboards and stop lights; emergency crossings; points of interest (e.g., parks, restaurants, gas stations, attractions, landmarks, etc., and their associated names); road markings (e.g., centerline markings separating opposing lanes, lane markings, stop lines, left turn guide lines, right turn guide lines, pedestrian crossings, bus lane markings, bicycle lane markings, safety island markings, road surface text, highway exit and entrance markings, etc.); curbs; railway lines; waterways; turn radii and/or angles of left and right turns; the distances and dimensions of road features; road lengths; distances to road intersections; the locations of dividers between two-way traffic; and/or the like, along with the geographic locations (e.g., geographic coordinates) associated with these elements. The map data 154 may also include reference data, such as real-time and/or historical traffic information, current and/or predicted weather conditions, road work information, information regarding laws and regulations (e.g., speed limits, whether a right turn on a red light is allowed or prohibited, whether a U-turn is allowed or prohibited, permitted directions of travel, and/or the like), news events, and/or the like.
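As a purely illustrative aid (not part of the disclosed embodiments), the enumeration above suggests that each map element pairs a type and a geometry, expressed in geographic coordinates, with optional attributes such as clearance heights or widths. The Python sketch below models that idea using hypothetical names such as MapElement and HighResolutionMapTile; it is an assumption about one possible representation, not the patent's schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical sketch of how elements of map data 154 might be represented.
# All names and fields are assumptions for illustration only.

LatLon = Tuple[float, float]  # (latitude, longitude) in degrees


@dataclass
class MapElement:
    element_type: str                # e.g. "lane_marking", "stop_line", "overpass"
    geometry: List[LatLon]           # polyline or polygon vertices
    attributes: Dict[str, float] = field(default_factory=dict)  # e.g. {"clearance_m": 4.2}


@dataclass
class HighResolutionMapTile:
    tile_id: str
    elements: List[MapElement] = field(default_factory=list)

    def elements_of_type(self, element_type: str) -> List[MapElement]:
        """Return all elements of a given type, e.g. every crosswalk marking."""
        return [e for e in self.elements if e.element_type == element_type]


if __name__ == "__main__":
    tile = HighResolutionMapTile(tile_id="demo")
    tile.elements.append(MapElement(
        element_type="overpass",
        geometry=[(37.7749, -122.4194), (37.7750, -122.4193)],
        attributes={"clearance_m": 4.2, "width_m": 12.0},
    ))
    print(len(tile.elements_of_type("overpass")))  # -> 1
```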
Although the map data 154 is shown as being stored in the data store 150 of the server 130, this is not meant to be limiting. For example, server 130 may transmit map data 154 to vehicle 120 for storage therein (e.g., in data store 129, as described below).
The search data 156 may include searches entered by a number of different users in the past. For example, the search data 156 may include text searches for pickup and/or destination locations. A search may be for a specific address, geographic location, name associated with a geographic location (e.g., the name of a park, restaurant, gas station, attraction, landmark, etc.), and so forth.
The log data 158 may include vehicle data provided by one or more vehicles 120. For example, the vehicle data may include route data, sensor data, perception data, vehicle 120 control data, vehicle 120 component failure and/or fault data, and the like.
FIG. 1B illustrates a block diagram showing the vehicle 120 of FIG. 1A communicating with one or more other vehicles 170A-N and/or the server 130 of FIG. 1A, according to one embodiment. As shown in fig. 1B, vehicle 120 may include various components and/or data storage. For example, the vehicle 120 may include a sensor array 121, a communication array 122, a data processing system 123, a communication system 124, an internal interface system 125, a vehicle control system 126, an operating system 127, a mapping engine 128, and/or a data store 129.
Communications 180 may be sent and/or received between vehicle 120, one or more vehicles 170A-N, and/or server 130. Server 130 may transmit and/or receive data from vehicle 120, as described above in connection with fig. 1A. For example, server 130 may transmit vehicle control instructions or commands to vehicle 120 (e.g., as communication 180). The vehicle control instructions may be received by a communication array 122 (e.g., an array of one or more antennas configured to transmit and/or receive wireless signals) operated by a communication system 124 (e.g., a transceiver). The communication system 124 may communicate vehicle control commands to a vehicle control system 126 that may operate the acceleration, steering, braking, lights, signals, and other operating systems 127 of the vehicle 120 to drive and/or steer the vehicle 120 and/or assist the driver in driving and/or steering the vehicle 120 through road traffic toward a destination location specified by the vehicle control commands.
As an example, the vehicle control instructions may include route data 163 that may be processed by the vehicle control system 126 to maneuver the vehicle 120, and/or assist a driver in maneuvering the vehicle 120, along a given route (e.g., an optimized route calculated by the server 130 and/or the mapping engine 128) toward the specified destination location. In processing the route data 163, the vehicle control system 126 may generate control commands 164 for execution by the operating system 127 (e.g., acceleration, steering, braking, turning, reversing, etc.) to cause the vehicle 120 to travel along the route to the destination location and/or to assist the driver in maneuvering the vehicle 120 along the route to the destination location.
The destination location 166 may be specified by the server 130 based on a user request (e.g., a pickup request, a delivery request, etc.) transmitted by an application running on the user device 102. Alternatively or additionally, a rider and/or driver of the vehicle 120 may provide the destination location 166 through user input 169 to the internal interface system 125 (e.g., a vehicle navigation system). In some embodiments, the vehicle control system 126 may transmit the input destination location 166 and/or the current location of the vehicle 120 (e.g., as a GPS data packet) as a communication 180 to the server 130 via the communication system 124 and the communication array 122. The server 130 (e.g., the navigation unit 140) may use the current location of the vehicle 120 and/or the input destination location 166 to perform an optimization operation that determines an optimal route for the vehicle 120 to travel to the destination location 166. The route data 163, including the optimal route, may be transmitted from the server 130 to the vehicle control system 126 via the communication array 122 and the communication system 124. As a result of receiving the route data 163, the vehicle control system 126 can cause the operating system 127 to maneuver the vehicle 120 through traffic along the optimal route to the destination location 166, assist the driver in maneuvering the vehicle 120 through traffic along the optimal route to the destination location 166, and/or cause the internal interface system 125 to display and/or present instructions for maneuvering the vehicle 120 through traffic along the optimal route to the destination location 166.
Alternatively or additionally, if the route data 163 includes the optimal route, the vehicle control system 126 may automatically input the route data 163 into the mapping engine 128. The mapping engine 128 may generate map data 165 using the optimal route (e.g., generate a map displaying the optimal route and/or driving instructions for the optimal route) and provide the map data 165 to the internal interface system 125 (e.g., via the vehicle control system 126) for display. The map data 165 may include information derived from the map data 154 stored in the data store 150 of the server 130. The displayed map data 165 may indicate an estimated time of arrival and/or show the progress of the vehicle 120 along the optimal route. The displayed map data 165 may also include indicators such as diversion instructions, emergency notifications, road work information, real-time traffic data, current weather conditions, information about laws and regulations (e.g., speed limits, whether right turns on red lights are allowed or prohibited, whether U-turns are allowed or prohibited, permitted directions of travel, etc.), news events, and/or the like.
The user input 169 may also be a request to access a network (e.g., network 110). In response to such a request, the internal interface system 125 can generate an access request 168, which can be processed by the communication system 124 to configure the communication array 122 to transmit and/or receive data corresponding to a user's interaction with the internal interface system 125 and/or an interaction of the user device 102 with the internal interface system 125 (e.g., a user device 102 connected to the internal interface system 125 via a wireless connection). For example, the vehicle 120 may include onboard Wi-Fi that passengers and/or the driver may access to send and/or receive emails and/or text messages, stream audio and/or video content, browse content pages (e.g., web pages, website pages, etc.), and/or access applications that require network access. Based on these user interactions, the internal interface system 125 can receive content 167 via the network 110, the communication array 122, and/or the communication system 124. The communication system 124 may dynamically manage network access to avoid or minimize disruption of the transmission of the content 167.
The sensor array 121 may include any number of one or more types of sensors, such as a satellite radio navigation system (e.g., GPS), light detection and ranging sensors, landscape sensors (e.g., radio detection and ranging sensors), inertial measurement units, cameras (e.g., infrared cameras, visible light cameras, stereo cameras, etc.), Wi-Fi detection systems, cellular communication systems, inter-vehicle communication systems, road sensor communication systems, feature sensors, proximity sensors (e.g., infrared, electromagnetic, photoelectric, etc.), distance sensors, depth sensors, and/or the like. The satellite radio navigation system may calculate the current position of vehicle 120 (e.g., within a range of 1-10 meters) based on an analysis of signals received from a constellation of satellites.
Light detection and ranging sensors, radio detection and ranging sensors, and/or any other similar types of sensors may be used to detect the surroundings of the vehicle 120 when the vehicle 120 is in motion or is about to begin moving. For example, light detection and ranging sensors may be used to bounce multiple laser beams off approaching objects to assess their distance and to provide accurate three-dimensional information about the surrounding environment. The data obtained from the light detection and ranging sensors may be used to perform object identification, motion vector determination, collision prediction, and/or accident avoidance procedures. Optionally, the light detection and ranging sensor may use a rotating scanning mirror assembly to provide a 360° viewing angle. The light detection and ranging sensors may optionally be mounted on the roof of the vehicle 120.
The inertial measurement unit may include gyroscopes and/or accelerometers oriented along the X, Y, and Z axes. The inertial measurement unit provides data about the rotational and linear motion of the vehicle 120, which can be used to calculate the motion and position of the vehicle 120.
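As a hedged illustration of how gyroscope and accelerometer readings can be turned into motion estimates, the planar dead-reckoning sketch below integrates yaw rate into heading and forward acceleration into speed and position. It is an assumption-level simplification for explanation only, not the vehicle's actual sensor fusion.

```python
import math

def dead_reckon(samples, dt, x=0.0, y=0.0, heading=0.0, speed=0.0):
    """Integrate planar IMU samples into a rough pose estimate.

    samples: iterable of (forward_accel_m_s2, yaw_rate_rad_s) tuples.
    dt: sample period in seconds.
    Returns (x, y, heading, speed). Purely illustrative; a real system
    would fuse IMU data with GPS/LiDAR and model sensor noise and bias.
    """
    for accel, yaw_rate in samples:
        heading += yaw_rate * dt          # integrate angular rate into heading
        speed += accel * dt               # integrate acceleration into speed
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y, heading, speed


if __name__ == "__main__":
    # One second of gentle acceleration while turning slightly (100 Hz samples).
    imu = [(1.0, 0.05)] * 100
    print(dead_reckon(imu, dt=0.01))
```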
The camera may be used to capture a visual image of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras, the cameras may provide a 360 ° view around the vehicle 120. The images from the camera may be used to read road markings (e.g., lane markings), read street signs, detect objects, and/or the like.
A Wi-Fi detection system and/or a cellular communication system may be used to triangulate Wi-Fi hotspots or cell towers, respectively, to determine the location of the vehicle 120 (optionally in conjunction with a satellite radio navigation system).
The inter-vehicle communication system (which may include a Wi-Fi detection system, a cellular communication system, and/or the communication array 122) may be used to receive data from and/or transmit data to the other vehicles 170A-N, such as the current speed and/or position coordinates of the vehicle 120, the time and/or position coordinates corresponding to a planned deceleration and the planned deceleration rate, the time and/or position coordinates at which a stop operation is planned, the time and/or position coordinates at which a lane change is planned along with the direction of the lane change, the time and/or position coordinates at which a turn operation is planned, the time and/or position coordinates at which a parking operation is planned, and/or the like.
A road sensor communication system (which may include a Wi-Fi detection system and/or a cellular communication system) may be used to read information from road sensors (e.g., indicating traffic speed and/or traffic congestion) and/or to read information from traffic control devices (e.g., traffic lights).
When a user requests pickup (e.g., through an application running on the user device 102), the user may specify a particular destination location. The initial position may be a current position of the vehicle 120, which may be determined using a satellite radio navigation system (e.g., GPS, Galileo, COMPASS, DORIS, GLONASS, and/or other satellite radio navigation systems) installed in the vehicle, a Wi-Fi positioning system, cellular tower triangulation, and/or the like. Alternatively, the initial position may be specified by the user through a user interface (e.g., internal interface system 125) provided by vehicle 120 or through user device 102 running the application. Alternatively, the initial location may be automatically determined from location information obtained from the user device 102. In addition to the initial location and the destination location, one or more waypoints may be specified, enabling multiple destination locations.
Raw sensor data 161 from sensor array 121 may be processed by an on-board data processing system 123. The processed data 162 may then be transmitted by the data processing system 123 to the vehicle control system 126 and optionally to the server 130 via the communication system 124 and the communication array 122.
Data store 129 may store map data (e.g., map data 154) and/or a subset of map data 154 (e.g., a portion of map data 154 corresponding to an approximate area in which vehicle 120 is currently located). In some embodiments, the vehicle 120 may record updated map data along the travel route using the sensor array 121 and transmit the updated map data to the server 130 via the communication system 124 and the communication array 122. The server 130 may then transmit the updated map data to one or more of the vehicles 170A-N and/or further process the updated map data.
The data processing system 123 may provide continuously or near-continuously processed data 162 to the vehicle control system 126 in response to point-to-point activity in the environment surrounding the vehicle 120. The processed data 162 may include a comparison between the raw sensor data 161, which represents the operating environment of the vehicle 120 and is continuously collected by the sensor array 121, and the map data stored in the data store 129. In one example, the data processing system 123 is programmed with machine learning or other artificial intelligence capabilities that enable the vehicle 120 to identify and respond to conditions, events, and/or potential hazards. In a variation, the data processing system 123 may continuously or near-continuously compare the raw sensor data 161 to the stored map data to perform positioning, so as to continuously or near-continuously determine the position and/or orientation of the vehicle 120. The positioning of the vehicle 120 may allow the vehicle 120 to be aware of its instantaneous location and/or orientation relative to the stored map data in order to maneuver the vehicle 120 through traffic on surface streets, and/or to assist the driver in maneuvering the vehicle 120 through traffic on surface streets, and to identify and respond to potential hazards (e.g., pedestrians) or local conditions, such as weather or traffic conditions.
Further still, positioning may enable vehicle 120 to tune or beam steer communication array 122 to maximize communication link quality and/or minimize interference from other communications of other vehicles 170A-N. For example, communication system 124 may beam steer the radiation pattern of communication array 122 in response to network configuration commands received from server 130. Data store 129 may store current network source map data that identifies network base stations and/or other network sources that provide network connectivity. The network source map data may indicate the location of the base stations and/or available network types (e.g., 3G, 4G, LTE, Wi-Fi, etc.) within the area in which the vehicle 120 is located.
Although FIG. 1B describes certain operations as being performed by the vehicle 120 or the server 130, this is not meant to be limiting. The operations performed by the vehicle 120 and the server 130 as described herein may be performed by any entity. For example, certain operations typically performed by the server 130 (e.g., transmitting updated map data to the vehicles 170A-N) may be performed by the vehicle 120 for load balancing purposes (e.g., reducing the processing load of the server 130, utilizing idle processing power on the vehicle 120, etc.).
Still further, any of the vehicles 170A-N may include some or all of the components of the vehicle 120 described herein. For example, vehicles 170A-N may include communication array 122 to communicate with vehicle 120 and/or server 130.
Improved high resolution map generation features and related interfaces
Certain methods disclosed herein relate to generating an interactive user interface that enables a user to alter three-dimensional point cloud data and/or associated pose graph data generated from light detection and ranging scans prior to generating a high-resolution map. The user may make selections within two-dimensional map representations with overlaid graphical node indicators to alter graph connections, remove nodes, view corresponding three-dimensional point clouds, and otherwise edit intermediate results from light detection and ranging scans to improve the quality of the high-resolution maps generated from the data manipulated by the user. The enhanced high-resolution map may be communicated to one or more vehicles, such as vehicle 120, to assist the driver in navigating, driving, and/or maneuvering the vehicle 120 and/or for navigating, driving, and/or maneuvering the vehicle 120 in an autonomous manner.
According to some embodiments of the present disclosure, three-dimensional point cloud scans are collected from a light detection and ranging sensor located on the roof of a vehicle (e.g., an autonomous vehicle, a vehicle used for location-based services, a vehicle providing driver assistance functionality, etc.) as the vehicle travels along a road. These light detection and ranging scans from different regions can then be passed to an automated data processing pipeline that includes filtering, combining, and matching the various scans. A high-resolution map may then be generated by projection of these point clouds. In addition to the three-dimensional point clouds and the two-dimensional map image, it is beneficial to have tools for visualizing the pose graph and the associated light detection and ranging scans, so that a supervising user assisting the mapping process can visually determine whether any inconsistencies or inaccuracies remain after the various steps of the automatic mapping pipeline.
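One simple way to picture the projection of point clouds into a two-dimensional map image is a top-down rasterization. The NumPy sketch below is illustrative only; the function name project_to_bev and the fixed grid resolution are assumptions, not the disclosed mapping pipeline.

```python
import numpy as np

def project_to_bev(points: np.ndarray, resolution_m: float = 0.1) -> np.ndarray:
    """Rasterize an (N, 3) point cloud into a top-down occupancy image.

    Each cell of the returned 2D array counts the LiDAR points falling
    into a resolution_m x resolution_m ground patch. Illustrative only.
    """
    xy = points[:, :2]
    mins = xy.min(axis=0)
    cols_rows = np.floor((xy - mins) / resolution_m).astype(int)
    width, height = cols_rows.max(axis=0) + 1
    image = np.zeros((height, width), dtype=np.int32)
    # Accumulate point counts per cell (rows index y, columns index x).
    np.add.at(image, (cols_rows[:, 1], cols_rows[:, 0]), 1)
    return image


if __name__ == "__main__":
    cloud = np.random.rand(10_000, 3) * [50.0, 30.0, 5.0]  # x, y, z in meters
    bev = project_to_bev(cloud, resolution_m=0.5)
    print(bev.shape, bev.sum())  # grid dimensions, total point count
```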
Aspects of the present disclosure include, for example, a user interface for viewing a high-resolution map at different levels, surveying a three-dimensional point cloud of a certain portion of the high-resolution map, measuring a distance between two points from the map or point cloud, and adjusting portions of the map to better align or match two or more point clouds. The user interfaces and associated functionality described herein may be used to improve the accuracy and efficiency of existing mapping methods.
As will be described further herein, aspects of the present disclosure include three related areas: map surveying, map editing, and map evaluation. When surveying a map in a user interface, a user may view a region of interest (ROI) in a two-dimensional map view and select a portion in order to view the corresponding three-dimensional point cloud in a separate pane or viewing area of the user interface. When evaluating and editing a map in a two-dimensional and/or three-dimensional view, the user can interactively make immediate changes to reduce or minimize unintended inaccuracies resulting from a previously completed automatic mapping process.
The map surveying features described herein include loading one or more map graphs (which, in some embodiments, may be in the form of pose graphs) and presenting visual representations of the nodes and edges of the graph in a portion of a user interface that presents a two-dimensional map data view. Such views within the user interface enable a user to visually inspect the constructed pose graph, navigate between portions of the graph to explore the associated three-dimensional point clouds, and determine whether any editing of the graph is required based on the visual inspection. The user interfaces described herein enable a user to pan and zoom within a two-dimensional map view or a three-dimensional point cloud view. The graph may be rendered using different forms of graphical indicators depending on the zoom level. For example, at a zoomed-out level, different sub-graphs may be abstracted as large rectangles or polygons covering areas of the map, while zooming in may cause the user interface to be updated to display the individual nodes and connections of the same sub-graph, as described further herein.
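For concreteness, a pose graph of the kind referenced above can be thought of as nodes (scan poses) plus edges (connections between scans). The sketch below, using hypothetical names, shows one way such a graph and the zoom-dependent choice between a summary rectangle and per-node indicators might be represented; it is an assumption rather than the disclosed implementation, and the zoom threshold is arbitrary.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class PoseNode:
    node_id: int
    x: float          # easting in meters (assumed local frame)
    y: float          # northing in meters
    heading: float    # radians


@dataclass
class PoseGraph:
    nodes: List[PoseNode] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)  # pairs of node_ids

    def bounding_box(self) -> Tuple[float, float, float, float]:
        """Rectangle summarizing the graph at zoomed-out levels."""
        xs = [n.x for n in self.nodes]
        ys = [n.y for n in self.nodes]
        return min(xs), min(ys), max(xs), max(ys)

    def render_hint(self, zoom_level: int) -> str:
        """Pick a rendering style: a region rectangle when zoomed out,
        individual nodes and connections when zoomed in."""
        return "region_rectangle" if zoom_level < 15 else "nodes_and_edges"


if __name__ == "__main__":
    g = PoseGraph(
        nodes=[PoseNode(0, 0.0, 0.0, 0.0), PoseNode(1, 5.0, 1.0, 0.1)],
        edges=[(0, 1)],
    )
    print(g.bounding_box(), g.render_hint(zoom_level=12), g.render_hint(zoom_level=18))
```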
The map surveying features described herein also include enabling a user to select one or more graph nodes in order to view their point clouds in a three-dimensional rendered view. Point clouds from different nodes may be rendered in different colors in the same view, enabling the user to visually determine the degree of alignment of adjacent point clouds and identify any inaccuracies. When viewing a point cloud, the user may choose to move, rotate, and/or zoom in three dimensions. The user interfaces described herein may also enable a user to compare two differently configured graphs in a single two-dimensional map view in order to identify any discrepancies or misalignments. Additionally, the user interface may include a background ruler grid and enable manual or automatic real-world distance measurement between two points selected in a two-dimensional map view or a three-dimensional point cloud view.
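The real-world distance measurement between two selected points reduces to a Euclidean distance once both points are expressed in the same metric frame. A minimal sketch follows, assuming the selected points are already in a local meters-based coordinate system; the function name is hypothetical.

```python
import math
from typing import Sequence

def point_distance(p1: Sequence[float], p2: Sequence[float]) -> float:
    """Euclidean distance between two selected points.

    Works for 2D map-view picks (x, y) or 3D point cloud picks (x, y, z),
    assuming both points are in the same meters-based local frame.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))


if __name__ == "__main__":
    print(point_distance((0.0, 0.0), (3.0, 4.0)))            # 5.0 m in the 2D map view
    print(point_distance((1.0, 2.0, 0.5), (4.0, 6.0, 0.5)))  # 5.0 m in the 3D view
```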
The map editing features described herein include enabling a user to delete edges from a graph, add edges to a graph, and delete nodes from a graph. These changes may then affect which point cloud data is used to construct the final high-resolution map, as well as how the point cloud data associated with different light detection and ranging scans is combined in the high-resolution map. Additionally, the user interface features described herein may enable a user to adjust the alignment or registration of two point clouds. For example, if the user identifies areas in the point cloud data where the map quality is not ideal due to misalignment or inaccuracy of one or more point clouds relative to another point cloud, the user may move the point cloud data to adjust its positioning relative to neighboring or redundant points from another light detection and ranging scan or capture.
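At bottom, moving or rotating a selected point cloud amounts to applying a rigid transform to every point in the selection. The following NumPy sketch (hypothetical names, yaw-only rotation for brevity) illustrates the idea rather than the patent's implementation.

```python
import numpy as np

def apply_user_adjustment(points: np.ndarray,
                          dx: float, dy: float, dz: float,
                          yaw_rad: float) -> np.ndarray:
    """Return a copy of an (N, 3) point cloud translated by (dx, dy, dz)
    and rotated about the vertical axis by yaw_rad.

    A real editor might rotate about an arbitrary axis or the cloud's
    centroid; this yaw-plus-translation form is a simplifying assumption.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    return points @ rotation.T + np.array([dx, dy, dz])


if __name__ == "__main__":
    cloud = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    # Nudge the selected cloud 0.5 m along x and rotate it 2 degrees.
    adjusted = apply_user_adjustment(cloud, dx=0.5, dy=0.0, dz=0.0,
                                     yaw_rad=np.deg2rad(2.0))
    print(adjusted.round(3))
```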
FIG. 2 illustrates a block diagram showing the server of FIGS. 1A and 1B in communication with a map editor device 202, according to one embodiment of a map editing environment 200. An administrative user may use the map editor device 202 to view, edit, and refine intermediate data at various points in the high-resolution map generation process. For example, as described below, a user of the map editor device 202 may access a user interface that enables the user to view and edit point cloud data and associated pose graph data, which may be stored in map data store 154, before the server 130 generates the final high-resolution map data for use by one or more vehicles 120. The map editor device 202 may communicate with the server 130 via a network 204, which may be any of the network types described above in connection with the network 110. Network 204 may be the same network as network 110 or a different network. For example, in one embodiment, network 204 may be a local area network controlled by the operator of server 130.
As shown in FIG. 2, in addition to the components shown in FIG. 1A, the server 130 may include a map editing unit 210, a user interface unit 212, a map rendering unit 214, and map editor data 214. In the illustrated embodiment, the map editing unit 210 may be generally responsible for effecting changes to the original and intermediate high-resolution map-related data, both programmatically and in response to user-initiated requests from the map editor device 202. The user interface unit 212 may be responsible for generating the various user interfaces described herein for display (e.g., by the map editor device 202), such as user interfaces that enable a user of the map editor device 202 to visualize and manipulate point cloud data, pose graph data, and intermediate and final high-resolution map data. The map rendering unit 214 may generate a high-resolution map from intermediate results, such as point cloud data and pose graph data.
The stored map editor data 214 may include, for example, a log of changes made to the point cloud data and/or the pose graph data by a user of the map editor device 202, so that the changes can be rolled back or undone. The map editor data 214 may also include information that is not strictly required to generate the high-resolution map itself, but that facilitates visualization and editing by the user. For example, such data may include colors assigned to respective graphs for display in the user interface, user preferences regarding keyboard shortcuts for graph or point cloud manipulation, three-dimensional rendering or two-dimensional projection preferences (e.g., default zoom level, resolution, color scheme, zoom or rotation sensitivity, etc.), portions or regions of the map marked by the user for further review, and/or other data. In some embodiments, the map editor device 202 may be a computing system, such as a desktop or laptop computer or a mobile computing device (e.g., a smartphone or tablet device). The map editor device 202 may include or be in communication with a display device, such as a display monitor, touchscreen display, or other well-known display device. The map editor device 202 may also include or communicate with a user input device, including but not limited to a mouse, a keyboard, a scrolling device, a touchscreen display, a motion capture device, and/or a stylus.
In one embodiment, the map editor device 202 may operate or execute an application (e.g., a browser or custom-developed application) that receives a user interface generated by the server 130 (e.g., by the user interface unit 212), displays the user interface, and sends responses, instructions, or requests back to the server 130 based on selections made by a user of the map editor device 202 within the user interface. The server 130 may then make changes to the data based on the user interaction and may send back an updated user interface for display by the map editor device 202. In another embodiment, the map editor device 202 may include its own map editing unit, user interface unit, and/or map rendering unit (e.g., such units may be implemented in executable instructions of an application operated by the map editor device 202), such that the map editor device 202 need not communicate with the server 130 or any other system to generate user interfaces for viewing and editing map data. For example, the map editor device 202 may load light detection and ranging data and/or intermediate data (e.g., pre-processed point cloud data and pose graphs) from the server 130 and then may not communicate with the server 130 again until the edited data or final high-resolution map data is sent back to the server 130 to be stored in the data store 150 and distributed to one or more vehicles 120. In other embodiments, various functions may be implemented by either the server 130 or the map editor device 202, depending on, for example, the hardware capabilities and network bandwidth considerations of each system in a given instance.
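In the client-server variant described above, the editor device's selections might be conveyed to the server as small structured edit requests. The payload below is a purely hypothetical illustration of such a round trip; the field names and operations are assumptions for explanation, not a documented API.

```python
import json

# Hypothetical edit request the map editor device might send back to the
# server after the user deletes a graph edge and nudges a point cloud.
# Field names, operation names, and identifiers are illustrative only.
edit_request = {
    "graph_id": "pose-graph-001",
    "operations": [
        {"op": "delete_edge", "from_node": 42, "to_node": 43},
        {"op": "adjust_cloud",
         "node_id": 42,
         "translation_m": [0.5, 0.0, 0.0],
         "yaw_deg": 2.0},
    ],
}

if __name__ == "__main__":
    print(json.dumps(edit_request, indent=2))
```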
FIG. 3 is an illustrative user interface 300 presenting a zoomed-out view that includes a three-dimensional point cloud rendering 320 and a two-dimensional map projection 310, where the two-dimensional map projection 310 includes graphical indicators 312, 314, and 316 representing areas covered by different light detection and ranging scans. As described above, each of the user interfaces described in connection with fig. 3-10 (and any associated three-dimensional rendering and/or two-dimensional projection included therein) may be generated by the server 130 or the map editor device 202, depending on the embodiment, and may be presented for display by the map editor device.
Each region marked by graphical indicators 312, 314, and 316 may represent, for example, hundreds or thousands of individual light detection and ranging scans, with the specific number depending on the zoom level of the current view. In one embodiment, a vehicle having one or more light detection and ranging sensors may be configured to capture scans periodically (e.g., every millisecond, every 10 milliseconds, every 100 milliseconds, every second, etc.) while driving along the streets represented in the two-dimensional map projection 310. The point cloud data captured by successive scans may thus partially overlap, and may be matched and preprocessed by well-known automated methods to create intermediate point cloud results and pose graphs used to generate the two-dimensional map projection 310 and the three-dimensional point cloud rendering 320. Such automated processes may include, for example, applying an Iterative Closest Point (ICP) algorithm that minimizes differences between adjacent point clouds and assigns connections between the point clouds represented by nodes in the pose graph based on match scores. However, in some cases, these automated methods may not produce optimal point cloud alignment and/or pose graph data. The user interfaces described herein, including user interface 300, may enable a user to visually identify potential inconsistencies, errors, misalignments, low-quality data, redundant data, and/or other problems that may remain after the automatic processing of the light detection and ranging data.
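For readers unfamiliar with ICP, the following minimal Python/NumPy sketch illustrates the general idea of iteratively pairing nearest points and solving for a rigid transform. It is a simplified illustration under our own assumptions (function names, a fixed iteration count, a mean-residual match score), not the preprocessing pipeline actually used by the described system.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=20):
    """Align source (N x 3) to target (M x 3); returns aligned points and a match score."""
    tree = cKDTree(target)
    aligned = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(aligned)                  # nearest-neighbour correspondences
        R, t = best_rigid_transform(aligned, target[idx])
        aligned = aligned @ R.T + t
    score = float(np.mean(tree.query(aligned)[0]))    # mean residual distance
    return aligned, score
```

The returned residual score stands in for the kind of match score that could be used to decide whether two point clouds should be connected in a pose graph.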
Although the graphical indicators 312, 314, and 316 are each represented as rectangles having different dashed or solid lines to distinguish their appearances from one another, this formatting is for illustration purposes only. The different line styles may represent different colors, so that the actual user interface presented may have, for example, a blue solid line for indicator 312, a red solid line for indicator 314, and a yellow solid line for indicator 316. In some embodiments, the color selected for a given indicator may indicate the quality of, or relative confidence in, the associated scan data, such that red indicates areas that may require attention or potential editing by the user. In other embodiments, the color or pattern may have no meaning beyond visually distinguishing between different light detection and ranging scan data sets. The different sets may be, for example, scans captured by the same vehicle at different times, or scans captured by different vehicles. Although the graphical indicators in the user interface 300 are shown as rectangles, this is not intended as a limitation. In other embodiments, a graphical indicator may be another polygonal, circular, or elliptical shape, and need not have straight or smooth edges (e.g., the shape may closely track the scanned area, such that it approximately follows the streets along which the light detection and ranging capture vehicle was driven).
The two-dimensional map projection 310 may be generated by the server or map editor device as a two-dimensional overhead projection of the light detection and ranging point cloud data captured by vehicles on the ground. In other embodiments, the two-dimensional map data may be based at least in part on images captured by a camera on the ground (e.g., on a vehicle), in the air, or on a satellite. The user may select a point or region in the two-dimensional map projection 310 to view the corresponding three-dimensional point cloud data in the left portion of the user interface containing the three-dimensional point cloud rendering 320. The user may rotate, pan, and zoom the two-dimensional or three-dimensional view individually while the other view remains static. In other embodiments, the other view may automatically adjust to match the panning, scrolling, selection, rotation, or scaling performed by the user in one view (e.g., scrolling the point cloud data presented in the three-dimensional point cloud view 320 may cause the two-dimensional representation 310 to be automatically updated). The user may zoom in or out of the two-dimensional or three-dimensional view using keyboard shortcuts, a scroll wheel, touch screen gestures, or other means. In some embodiments, including embodiments other than those shown, buttons or other selectable options may be presented in the user interface 300 to enable scrolling, panning, rotating, selecting, and/or zooming in any of the views.
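One simple way to obtain such an overhead projection is to discard the elevation coordinate and rasterize the remaining x/y coordinates into a fixed-resolution grid. The short Python sketch below illustrates this idea; the cell size and the count-based intensity are assumptions for illustration only, not the disclosed rendering method.

```python
import numpy as np

def project_top_down(points, cell_size=0.1):
    """Rasterize an (N x 3) LiDAR point cloud into a 2-D overhead image.

    Each cell counts how many points fall into it, which can be mapped
    to pixel intensity when the projection is displayed.
    """
    xy = points[:, :2]                         # drop elevation for an overhead view
    mins = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - mins) / cell_size).astype(int) + 1
    image = np.zeros((rows, cols), dtype=np.int32)
    ix = ((xy - mins) / cell_size).astype(int)
    np.add.at(image, (ix[:, 1], ix[:, 0]), 1)  # accumulate point counts per cell
    return image
```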
Fig. 4 is an illustrative user interface 400 including a magnified view of a three-dimensional point cloud rendering 420 and a two-dimensional map projection 410, with superimposed graphical indicators of nodes and connections within a pose graph associated with the point cloud data. The two-dimensional map view 410 may be displayed as a result of a user request to zoom in on the previously presented two-dimensional map view 310 discussed above with reference to fig. 3. For example, the user interface may be configured to switch between different styles of abstract representations or groupings of point cloud scans when a threshold zoom level is reached. For instance, upon zooming in to a scale that satisfies a predetermined threshold, the two-dimensional map representation may change its graphical overlay data to present nodes and corresponding connections (representing graph nodes and edges, respectively, in the pose graph) rather than higher-level abstractions or groupings, such as rectangles or polygons that define regions.
Each displayed node (e.g., nodes 412-415) may represent multiple scans that have been grouped together as a single graph node in the pose graph during processing (e.g., using ICP). For example, in one embodiment, each graph node may represent twenty adjacent or partially overlapping light detection and ranging scans captured in close proximity to each other in sequence (e.g., one per second). As shown, the nodes represented in the two-dimensional map representation 410 may have different appearances to indicate that they are associated with different groups (e.g., captured at different times and/or by different sensors), different pose graphs, or different related sub-graphs. Connections may be presented between nodes in the same or different groupings to illustrate that their point clouds partially overlap and that there is sufficient confidence in the match (e.g., as determined by ICP, another automated process, and/or user input) to treat them as neighboring groupings when generating the high resolution map. Although cross-hatching is used to illustrate the different appearances and groupings of node indicators in the figures, it should be recognized that these patterns may represent different colors in an actual user interface.
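Conceptually, the pose graph described here can be pictured as nodes that each bundle a group of scans with an estimated pose, and edges that carry a match confidence. The Python sketch below is a simplified illustration using assumed names (`PoseNode`, `PoseGraph`, `add_edge`, `remove_node`); the actual data structures used by the system are not disclosed at this level of detail.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class PoseNode:
    node_id: int
    scan_ids: List[int]            # e.g., ~20 consecutive LiDAR scans
    pose: np.ndarray               # 4 x 4 homogeneous transform in the map frame
    group: str                     # capture session / vehicle identifier

@dataclass
class PoseGraph:
    nodes: Dict[int, PoseNode] = field(default_factory=dict)
    # edge (i, j) -> match confidence from ICP or user confirmation
    edges: Dict[Tuple[int, int], float] = field(default_factory=dict)

    def add_edge(self, i: int, j: int, confidence: float) -> None:
        """Connect two nodes whose point clouds partially overlap."""
        self.edges[(min(i, j), max(i, j))] = confidence

    def remove_node(self, i: int) -> None:
        """Drop a node (e.g., redundant or low-quality data) and its edges."""
        self.nodes.pop(i, None)
        self.edges = {e: c for e, c in self.edges.items() if i not in e}
```

Under this picture, the "connect nodes" and "remove nodes" options discussed below would map naturally onto operations like `add_edge` and `remove_node`.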
In the illustrated user interface 400, the user has selected graphical indicators for node 414 and node 412, which are colored in different colors (shown in different cross-hatching or patterns in the figure) to illustrate that they are part of different groupings and/or sub-graphs. In response to the selection of each node, or once a node is selected and the "show options" option 434 is selected, the three-dimensional point cloud view 420 may be updated to display the rendered point cloud data corresponding to the selected node. In a "color mode" (which may be one three-dimensional viewing option within the user interface), the three-dimensional rendering in the three-dimensional point cloud view 420 may be colored to match or correspond to the coloring of the corresponding nodes in the two-dimensional view, or to otherwise visually indicate which sets of point cloud data come from different sources or groupings. Upon viewing the relative match of the point cloud data of node 414 and node 412 in the three-dimensional point cloud view 420 (e.g., with the multiple sets of point cloud data rendered together at the same time), the user may determine that the match is sufficient and add a new connection between nodes 414 and 412 by selecting the "connect nodes" option 430. The user may then choose to save this updated graph data to add a new edge between the nodes represented by graphical indicators 414 and 412 in the stored pose graph data, which will then be used by the server 130 and/or map editor device 202 to generate and/or update the high resolution map.
Fig. 5 is an illustrative user interface 500 including a three-dimensional point cloud rendering and a two-dimensional map projection 510, where two user-selected nodes have been removed. For example, as mentioned above in connection with fig. 4, a user may view the three-dimensional point cloud data associated with node indicators 414 and 412. If, rather than determining that the nodes match each other, the user determines that their point cloud data should not be used to generate the high resolution map, the user may select the "remove nodes" option 512 to delete the two nodes. For example, the user may delete these nodes if the corresponding point cloud data is of poor quality and/or is redundant with other point cloud data captured at or near the same location.
If this change was made in error, the user may select the "undo" option 514; otherwise, the user may select the "save graph" option 516 to delete the two nodes and associated edges from the stored graph, or to mark the nodes (and their associated point cloud data) in the stored graph data for omission when constructing the high resolution map. In some embodiments, the user may decide to delete a node based on a combination of the visual information provided by the two-dimensional representation and the three-dimensional rendering. For example, the two-dimensional projection 510 may indicate that two nodes from different graphs or groupings substantially overlap or are in the same location, while the three-dimensional rendering of the point cloud data may inform the user which of the redundant nodes is associated with better quality point cloud data.
Fig. 6 is an illustrative user interface 600 presenting a zoomed-out view that includes a three-dimensional point cloud rendering and a two-dimensional map projection 610, with changes made to the displayed pose graph data based on user interaction with the user interface. In this example, the user has selected to add a connection between the nodes represented by the previously unconnected graphical indicators 612 and 614 in the user interface 400 of fig. 4. The user has also removed nodes in the same area in order to eliminate redundant data and optimize the point cloud data. Based on these changes, the pose graph stored in the data store 150 can be changed by the server 130 and/or the map editor device 202 to reflect the node deletions and edge additions selected by the user through the user interface 600. In this manner, the user may improve the quality and accuracy of the high resolution map that will subsequently be generated based on the combination of the altered pose graph data and the associated point clouds during map generation.
Fig. 7 is a flow diagram of an illustrative method 700 for providing user interface functionality that enables a user to view and edit point cloud and pose graph data for use in generating high resolution maps. As described above, either the map editor device 202 or the server 130 may perform the various steps described herein, depending on the embodiment. Thus, references to the system in the flowchart descriptions of fig. 7 and 10 may refer to either the server 130 or the map editor device 202, depending on the embodiment. Many details relevant to the various blocks of fig. 7 have been described above and, to avoid repetition, are only summarized below.
At block 702, the system may obtain light detection and ranging scan data and/or other sensor or camera data that may be used to generate a high resolution map. For example, as described above, the obtained sensor data may include radio detection and ranging data, infrared camera images, inertial measurement unit data, and so forth. At block 704, the system may then assign individual light detection and ranging scans and/or other captured data to nodes in the pose graph. At block 706, the system may then perform point cloud matching, filtering, and/or other automatic optimization of the point clouds and/or pose graph. These pre-processing or intermediate steps in creating a high resolution map are well known in the art and need not be described in detail herein. For example, point cloud matching and pose graph construction may be based in part on an iterative closest point algorithm.
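As a purely illustrative example of block 704, consecutive scans might be assigned to pose graph nodes in fixed-size batches, consistent with the twenty-scan grouping mentioned earlier. The helper below is an assumption for illustration; the actual grouping criteria used by the system are not limited to fixed batch sizes.

```python
from typing import List, Sequence

def group_scans(scan_ids: Sequence[int], scans_per_node: int = 20) -> List[List[int]]:
    """Assign consecutive LiDAR scans to pose graph nodes in fixed-size batches.

    Scans captured close together in time overlap heavily, so grouping them
    keeps the pose graph small while preserving local structure.
    """
    return [list(scan_ids[i:i + scans_per_node])
            for i in range(0, len(scan_ids), scans_per_node)]

# Example: 65 scans become four nodes of sizes 20, 20, 20, and 5.
nodes = group_scans(range(65))
```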
At block 708, the system may generate a user interface that includes an interactive graphical representation of the pose graph data (including nodes and edges) as a two-dimensional rendering in a first portion of the user interface. Such a user interface has been described above in connection with fig. 4, for example. Next, at block 710, the system may display an interactive three-dimensional rendering in a second portion of the user interface, as described above in connection with the various user interfaces. At block 712, the system may receive, via the user interface, a user edit of at least one point cloud in the three-dimensional rendering or of at least one graph node or edge in the two-dimensional rendering, as described above in connection with the example user interfaces.
Finally, at block 714, the system may generate a high resolution map based on the two-dimensional graph data and the corresponding three-dimensional point cloud data, incorporating the user edits received via the user interface. Methods of generating a high resolution map from the intermediate pose graph and point cloud data are well known in the art and need not be described herein. However, the additional editing and optimization of the intermediate results through the user interfaces described herein produces an improved high resolution map compared to applying prior art methods in the final step without the intermediate editing described herein.
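Although map generation itself is outside the scope of the user interfaces described here, the final step can be pictured as transforming each node's point cloud by its (possibly user-corrected) pose and merging the results. The Python sketch below is only a schematic illustration under that assumption; real map generation involves additional optimization and filtering steps not shown here.

```python
import numpy as np

def assemble_map(node_points, node_poses):
    """Merge per-node point clouds into one map-frame cloud.

    node_points: list of (N_i x 3) arrays in each node's local frame.
    node_poses:  list of 4 x 4 homogeneous transforms (local -> map frame),
                 reflecting any user edits made through the interface.
    """
    merged = []
    for pts, pose in zip(node_points, node_poses):
        homogeneous = np.hstack([pts, np.ones((len(pts), 1))])  # N x 4
        merged.append((homogeneous @ pose.T)[:, :3])            # back to N x 3
    return np.vstack(merged)
```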
FIG. 8 is an illustrative user interface 800 including a magnified view of a three-dimensional point cloud rendering 820 and a two-dimensional map projection 810, including a display of distance measurements between points selected by a user. The user can select (e.g., by clicking with a mouse cursor or touching a touch screen) any two points within the two-dimensional view 810 or the three-dimensional view 820 to view a distance measurement between the points. For example, the user may select points 821 and 822 in the three-dimensional view 820 and then select the "three-dimensional measurement" option 825 to present the distance between these two points as measurement 823. The distance may be computed by the computing system (map editor device 202 or server 130) using the (x, y, z) coordinates of each point in the three-dimensional virtual space. The distance may reflect the actual real-world distance between the captured light detection and ranging data points, and may employ user-customizable units of measure and/or scale. Similarly, the user may make measurements in the two-dimensional view 810, e.g., selecting points 811 and 812 and then selecting the "two-dimensional measurement" option 815 to present the distance as measurement 813. In some embodiments, the corresponding measurement values and points may be automatically added to the other view (two-dimensional or three-dimensional) than the one in which the user selected the points, while in other embodiments the user may set measurement points independently in each view. For example, a user may select points 811 and 812 in the two-dimensional view 810 to present measurement 813. In response to the user selecting points 811 and 812, the three-dimensional view 820 may be automatically updated to display the selection of points 821 and 822 and measurement 823. The automatically selected points in the view other than the one in which the user selected the points may correspond to the same or nearly the same geographic locations as the user-selected points (e.g., point 821 may be at the same geographic location as point 811 and/or point 822 may be at the same geographic location as point 812).
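The measurement itself reduces to the Euclidean norm of the difference between the selected coordinates, optionally multiplied by a unit scale. The snippet below illustrates this for both the two-dimensional and three-dimensional cases; the `scale` parameter, standing in for the user-customizable units mentioned above, is an assumption for illustration.

```python
import math

def measure(p1, p2, scale=1.0):
    """Euclidean distance between two selected points (2-D or 3-D),
    scaled from virtual-space units to a real-world unit of measure."""
    return scale * math.dist(p1, p2)

print(measure((0.0, 0.0, 0.0), (3.0, 4.0, 12.0)))  # 13.0
print(measure((1.0, 2.0), (4.0, 6.0)))             # 5.0
```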
Fig. 9 is an illustrative user interface 900 including a three-dimensional point cloud rendering 902 of two point clouds 910 and 912, which enables a user to visually realign or match points in the respective point clouds. User interface 900 may be considered an "adjustment mode" interface that a user may enter by selecting an "enter adjustment mode" selectable option in a previously presented user interface. In other embodiments, the functionality provided in this adjustment mode may be directly accessible in any of the three-dimensional point cloud views of the previously described user interfaces, and may be accessible while the two-dimensional map representation is still present and available for interaction in the same user interface as the adjustment view.
In some embodiments, point cloud data 910 and point cloud data 912 may each represent one or more different light detection and ranging scans, where the real-world regions captured by the scans at least partially overlap one another. For example, the point clouds 910 and 912 may each be associated with different adjacent graphical indicators selected by the user in the two-dimensional map view for further analysis or editing. Visual information in the two-dimensional view, such as the color of a graphical indicator or shading present in the vicinity of a node, may indicate to the user that these point clouds may require re-registration, realignment, or manual matching. To better facilitate visually assisted point cloud matching, the point cloud data 910 may be presented in one color, while the point cloud data 912 may be presented in a different (e.g., contrasting) color. The user may select either of the displayed point cloud sets and may then use the adjustment controls 904 to move the selected points or adjust the yaw, pitch, or roll angles. As indicated, the adjustment options may have varying scales (e.g., separate options may be provided to move the points of the point cloud by 0.1 along the x-axis or by 1.0 along the x-axis).
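Each such adjustment amounts to composing a small translation or rotation with the selected point cloud's current placement. The Python sketch below shows one way this could be applied; the function names and the Z-Y-X (yaw, pitch, roll) rotation convention are assumptions, not details taken from the disclosure.

```python
import numpy as np

def rotation_matrix(yaw=0.0, pitch=0.0, roll=0.0):
    """Z-Y-X rotation matrix from yaw, pitch, and roll given in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def adjust_point_cloud(points, translation=(0.0, 0.0, 0.0),
                       yaw=0.0, pitch=0.0, roll=0.0):
    """Apply a user adjustment (move and/or rotate) to an (N x 3) point cloud."""
    R = rotation_matrix(yaw, pitch, roll)
    return points @ R.T + np.asarray(translation)

# e.g., a "move left 0.1 along x" shortcut might call:
# cloud = adjust_point_cloud(cloud, translation=(-0.1, 0.0, 0.0))
```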
While the adjustment options 904 are presented as keyboard accelerator keys (e.g., pressing the "1" key moves the selected point cloud 0.1 to the left along the x-axis, while pressing the "!" key moves the selected point cloud 0.1 to the right along the x-axis), other input methods may be used in other embodiments. For example, in other embodiments, the user may speak a command (e.g., "left 1," "roll 0.5") or select a button or other selectable option in the user interface. In some embodiments, the system (e.g., the server 130 and/or the map editor device 202) may automatically generate prompts, tips, or suggestions as to how the point clouds should be modified to better match each other, and these suggestions may be presented as text or speech, or the modifications may be applied automatically in the visualization with user confirmation then requested. For example, the system may identify two or more point clouds whose edges are misaligned by less than a threshold distance (e.g., 0.1, 0.5, 1, etc. along the x-axis, y-axis, and/or z-axis) and/or a threshold angle (e.g., 1°, 5°, 10°, etc. about the x-axis, y-axis, and/or z-axis). The system may calculate the amount by which one or more point clouds should be altered so that the edges are no longer misaligned. As an illustrative example, the system may identify that two edges of the point clouds 910 and 912 are misaligned by less than the threshold distance and/or angle. In response, the system may automatically generate prompts, tips, or suggestions as to how to alter the point clouds 910 and 912 to better match each other. Once the user has finished matching or realigning the point clouds, the user may select the "Add Current changes" option 920 or the "Save all changes" option 922. Selection of a save option may then cause the new relative location and orientation of the point clouds to be stored in the data store 150 for subsequent use by the server 130 and/or the map editor device 202 in generating and/or updating the high resolution map. The user can exit the adjustment mode and return to the previously presented user interface by selecting option 924.
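A very rough way to generate such a suggestion is to compare summary statistics of the two overlapping clouds (for example, their centroids) and propose a correction only when the offset falls below the configured thresholds. The sketch below illustrates that idea in its simplest form; it is our own simplification, not the system's actual heuristic, and the threshold value is arbitrary.

```python
import numpy as np

def suggest_translation(cloud_a, cloud_b, max_offset=1.0):
    """Suggest a translation for cloud_a if its centroid is only slightly
    offset from cloud_b's centroid (i.e., likely the same area, misaligned).

    Returns a 3-vector to add to cloud_a, or None if the offset is too
    large to treat as a simple misalignment.
    """
    offset = cloud_b.mean(axis=0) - cloud_a.mean(axis=0)
    if np.all(np.abs(offset) < max_offset):
        return offset
    return None
```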
Fig. 10 is a flow diagram of an illustrative method 1000 for enabling a user to visually edit the positioning of one or more point clouds for use in generating a high resolution map, which may be viewed as a process of manually or semi-manually matching two sets of point cloud data through an interactive user interface. Many details relevant to the various blocks of fig. 10 have been described above and, to avoid repetition, are only summarized below. At block 1002, the system may load point cloud data for two or more point clouds based on the user's selection of the corresponding graphical indicators in the two-dimensional map view portion of the user interface (as in the user interfaces described above). In some embodiments, the point cloud data may be retrieved from the data store 150 and loaded into random access memory (RAM) of the server or map editor device, depending on the embodiment, for use by the system in rendering the point cloud data in a three-dimensional virtual space.
At block 1004, the system may render, for display in the user interface, two or more point clouds, each of which may have been generated by a different light detection and ranging scan (or a different set of light detection and ranging scans). For example, the point clouds may have been captured at different times, by different sensors, or with different filtering or pre-processing applied to the corresponding light detection and ranging data. At block 1006, the system may receive a user selection of one of the two or more point clouds that the user intends to manipulate (e.g., to move or rotate so that it better matches the other displayed point cloud data).
Next, at block 1008, the system may receive one or more commands from the user to move and/or rotate the selected point cloud in the three-dimensional virtual space, as described above in connection with fig. 9. At block 1010, the system may then adjust the displayed position of the selected point cloud relative to the other concurrently displayed point clouds in real time in response to the user commands. At block 1012, in response to a user selection (as described above), the system may store the adjusted point cloud location data, e.g., replacing previously stored data in the data store 150, for use in generating a new high resolution map.
Other embodiments are possible within the scope of the invention, such as arranging, ordering, subdividing, organizing, and/or combining the above components, steps, blocks, operations, and/or messages/requests/queries/instructions differently with respect to the figures described herein. In some embodiments, different components may initiate or perform a given operation. For example, it should be appreciated that in other embodiments, operations described as involving collaboration or communication between the server 130 and the map editor device 202 may be implemented entirely by a single computing device (e.g., the server 130 communicating only with the display and the user input device, or the map editor device 202 executing only locally stored executable instructions for an application running on the map editor device).
Example embodiments
Some exemplary enumerated embodiments of the present invention are recited in this section in the form of methods, systems, and non-transitory computer-readable media, by way of illustration and not limitation.
In one embodiment, the computer-implemented method described above comprises obtaining point cloud data generated from a plurality of light detection and ranging (LiDAR) scans captured along a plurality of roads, and grouping the point cloud data to form a plurality of point cloud groupings comprising at least (a) first grouped point cloud data captured by light detection and ranging in a first geographic area during a first time period and (b) second grouped point cloud data captured by light detection and ranging in the first geographic area during a second time period, wherein at least a first portion of the first grouped point cloud data intersects at least a second portion of the second grouped point cloud data in three-dimensional space. The method may further comprise generating a user interface for display, wherein the user interface includes a two-dimensional map representation of at least a portion of the first geographic area, wherein the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data. The method may then include superimposing a first graphical indicator and a second graphical indicator within the two-dimensional map representation within the user interface, wherein the first graphical indicator indicates a first location of the first grouped point cloud data within the two-dimensional map representation, and wherein the second graphical indicator indicates a second location of the second grouped point cloud data within the two-dimensional map representation, and receiving a zoom-in request through the user interface. In response to the zoom-in request, the method may include updating the display of the two-dimensional map representation to include additional graphical overlay data, wherein the graphical overlay data includes a plurality of node indicators and corresponding connections between the respective node indicators, wherein the plurality of node indicators includes: (a) a first set of node indicators representing nodes in a first pose graph associated with the first set of point cloud data and (b) a second set of node indicators representing nodes in a second pose graph associated with the second set of point cloud data, and then receiving, via the user interface, a user selection of at least one node indicator in the first set of node indicators, wherein the at least one node indicator represents at least a first node in the first pose graph. The method may further comprise: generating, for display, a three-dimensional point cloud rendering of the point cloud data represented by the at least one node indicator within a different portion of the user interface than the two-dimensional map representation; and presenting selectable options within the user interface for manipulating at least the first pose graph, wherein the selectable options include at least one of (1) a first option to remove the first node from the first pose graph and (2) a second option to edit one or more connections of the at least one node indicator, wherein the editing includes at least one of deleting a connection, or adding a connection between the at least one node indicator and a different node indicator in the first or second set of node indicators.
The method may include generating altered pose graph data for at least one of the first pose graph or the second pose graph based on a user selection of at least one of the first option or the second option, and generating a high resolution map based on the altered pose graph data and the point cloud data.
The computer-implemented method described above may further comprise: the high resolution map is stored in an electronic data store and transmitted over a network to a plurality of vehicles for use in navigating one or more of the plurality of vehicles. The first graphical indicator and the first set of node indicators may be displayed in a first color, wherein the second graphical indicator and the second set of node indicators are displayed in a second color, and wherein the first color is different from the second color.
According to another embodiment, a computer system may include a memory and a hardware processor in communication with the memory and configured with processor-executable instructions to perform certain operations. The operations may include obtaining point cloud data generated by a plurality of light detection and ranging (LiDAR) scans of a geographic area and then generating a user interface for display, where the user interface includes a two-dimensional map representation of at least a portion of the geographic area. The operations may also include overlaying graphical overlay data within the two-dimensional map representation within the user interface, wherein the graphical overlay data includes a plurality of node indicators and corresponding connections between respective node indicators, wherein the plurality of node indicators includes: (a) a first set of node indicators representing nodes in a first pose graph associated with the first set of point cloud data and (b) a second set of node indicators representing nodes in a second pose graph associated with the second set of point cloud data, and then receiving, via the user interface, a user selection of at least one node indicator in the first set of node indicators, wherein the at least one node indicator represents at least a first node in the first pose graph. The operations may also include, in response to the user selection, generating for display, within a different portion of the user interface than the two-dimensional map representation, a three-dimensional point cloud rendering of the point cloud data represented by the at least one node indicator. The operations may also include presenting selectable options within the user interface for manipulating at least the first pose graph, wherein the selectable options include at least one of (1) a first option to remove the first node from the first pose graph and (2) a second option to edit one or more connections of the at least one node indicator, wherein the editing includes at least one of deleting a connection, or adding a connection between the at least one node indicator and a different node indicator in the first or second set of node indicators. The operations may also include generating altered pose graph data for at least one of the first pose graph or the second pose graph based on a user selection of at least one of the first option or the second option, and generating a high resolution map based on the altered pose graph data and the point cloud data.
The operations of the computer system above may also include generating for display, within a different portion of the user interface than the two-dimensional map representation and while the three-dimensional point cloud rendering of point cloud data represented by the at least one node indicator is displayed, a second three-dimensional point cloud rendering of point cloud data represented by a second node indicator selected by the user within the two-dimensional map representation, wherein the second node indicator is located in a second set of node indicators.
In one embodiment, the three-dimensional point cloud rendering of the point cloud data represented by the at least one node indicator is displayed in a different color than the second three-dimensional point cloud rendering. In another embodiment, each individual node indicator in the first set of node indicators represents a plurality of light detection and ranging scans captured in proximity to one another. In another embodiment, the user selection is of the first option, and the altered pose graph data is generated such that one or more point clouds associated with the at least one node indicator are removed from consideration by the computer system when generating the high resolution map. In another embodiment, the user selection is of the second option, and the altered pose graph data is generated to add a connection between the at least one node indicator from the first pose graph and a node indicator from the second pose graph. In another embodiment, the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data.
In another embodiment of the computer system above, the user interface further provides a measurement function that enables a user to select any two points in the two-dimensional map representation, and selection of two points using the measurement function causes the computer system to display a line between the two points and an automatically calculated distance measurement between the two points. In another embodiment, the operations further comprise automatically updating a three-dimensional point cloud rendering in the user interface to mark a second line within the three-dimensional point cloud rendering at a location in the three-dimensional virtual space corresponding to the location of the line displayed in the two-dimensional map representation. In another embodiment, the initial connections between the respective node indicators are based at least in part on confidence scores generated by the computer system during a point cloud matching process performed prior to generating the user interface for display. In another embodiment, the point cloud matching process includes applying an iterative closest point algorithm.
According to another embodiment, a non-transitory computer-readable medium stores computer-executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform particular operations. The operations may include generating, for display, a user interface, wherein the user interface includes a two-dimensional map representation of at least a portion of a geographic area, and in the two-dimensional map representation within the user interface, presenting graphical data including a plurality of node indicators and corresponding connections between the respective node indicators, wherein the plurality of node indicators includes (a) a first set of node indicators representing nodes in a first pose graph associated with a first grouping of point cloud data and (b) a second set of node indicators representing nodes in a second pose graph associated with a second grouping of point cloud data. The operations may also include receiving, by a user interface, a user selection of at least one node indicator of a first set of node indicators, wherein the at least one node indicator represents at least a first node in a first pose graph, and in response to the user selection, generating, within the user interface, a three-dimensional point cloud rendering of point cloud data represented by the at least one node indicator for display. The operations may also include presenting selectable options within the user interface for manipulating at least the first pose graph, wherein the selectable options include (1) a first option to remove the first node from the first pose graph and (2) a second option to edit one or more connections of the at least one node indicator, wherein the editing includes at least one of deleting a connection, or adding a connection between the at least one node indicator and a different node indicator of the first or second set of node indicators, and then generating altered pose graph data for at least one of the first pose graph or the second pose graph based on a user selection of at least one of the first option or the second option. The operations may also include generating a high resolution map based on the altered pose graph data and the point cloud data.
According to one embodiment, referring to the non-transitory computer readable medium above, the first set of node indicators is displayed in a different color than the second set of node indicators to visually indicate the respective pose graph of each individual node indicator. In another embodiment, each individual node indicator in the first set of node indicators and the second set of node indicators represents a plurality of light detection and ranging scans captured in proximity to each other. In another embodiment, when the user selection is of the first option, the altered pose graph data is generated such that one or more point clouds associated with the at least one node indicator are removed from consideration by the computer system when generating the high resolution map. In another embodiment, when the user selection is of the second option, the altered pose graph data is generated to add a connection between the at least one node indicator from the first pose graph and a node indicator from the second pose graph. In another embodiment, the user interface also provides a measurement function that enables a user to select any two points in the two-dimensional map representation or the three-dimensional point cloud rendering, and selection of two points using the measurement function causes the computer system to display a line between the two points and an automatically calculated distance measurement between the two points.
According to one embodiment, a computer-implemented method described herein includes obtaining point cloud data created based at least in part on a plurality of light detection and ranging (LiDAR) scans of a geographic area, and generating for display a user interface, wherein the user interface includes a two-dimensional map representation of at least a portion of the geographic area, wherein the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data, wherein the user interface includes a plurality of graphical indicators superimposed within the two-dimensional map representation, and wherein each graphical indicator represents a different set of one or more light detection and ranging scans. The method also includes receiving, by user interaction with the user interface, a user selection of at least a first graphical indicator and a second graphical indicator of the plurality of graphical indicators, wherein the first graphical indicator represents a first set of point cloud data and the second graphical indicator represents a second set of point cloud data, wherein the first set of point cloud data partially intersects at least a portion of the second set of point cloud data in three-dimensional space. The method also includes generating a three-dimensional rendering of the first and second sets of point cloud data, wherein relative display positions of the first and second sets of point cloud data in the three-dimensional rendering visually indicate that a first subset of points of the first set of point cloud data partially intersects a second subset of points of the second set of point cloud data, wherein the first subset of points is not fully aligned with the second subset of points, and updating the user interface to include the display of the three-dimensional rendering, wherein the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color. The method also includes displaying, within the user interface, a plurality of suggested commands for altering a position of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points to the second subset of points, and receiving one or more user commands to edit the position of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data. The method also includes updating, in real time, a display of at least the first set of point cloud data relative to the second set of point cloud data in response to the one or more user commands, and receiving, via the user interface, an indication to update the stored point cloud data to reflect the one or more user commands. The method may then include storing the adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands, and generating a high-resolution map of the geographic area, wherein the high-resolution map is generated based at least in part on the adjusted point cloud data and other point cloud data from the plurality of light detection and ranging scans.
According to another embodiment, the computer-implemented method above may further comprise: the high resolution map is stored in an electronic data store and transmitted over a network to a plurality of vehicles for use in navigating one or more of the plurality of vehicles. According to another embodiment, a three-dimensional rendering is presented in a user interface while a two-dimensional map representation of the geographic area is still displayed in the user interface, wherein the three-dimensional rendering is presented in a different portion of the user interface than the two-dimensional map representation. According to another embodiment, the method may further include receiving, by user interaction with the two-dimensional map representation, a selection of a third graphical indicator of the plurality of graphical indicators, and updating a display of a three-dimensional rendering within the user interface to include a rendering of a third set of point cloud data associated with the third graphical indicator.
According to another embodiment, a computer system may include a memory and a hardware processor in communication with the memory and configured with processor-executable instructions to perform certain operations. The operations may include obtaining a first set of point cloud data and a second set of point cloud data, wherein the first and second sets of point cloud data are each based at least in part on a plurality of light detection and ranging (LiDAR) scans of a geographic area, and generating a three-dimensional rendering of the first set of point cloud data and the second set of point cloud data, wherein a relative display location of the first set of point cloud data and the second set of point cloud data in the three-dimensional rendering visually indicates a partial intersection between a first subset of points of the first set of point cloud data and a second subset of points of the second set of point cloud data, wherein the first subset of points is not fully aligned with the second subset of points. The operations may also include presenting, for display, a user interface, wherein the user interface includes a display of the three-dimensional rendering, and displaying, within the user interface, a plurality of suggested commands for altering the positioning of the first set of point cloud data in the three-dimensional virtual space such that at least the first subset of points better matches the second subset of points. The operations may also include receiving one or more user commands to edit the positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data, and updating, within the user interface, the display of at least the first set of point cloud data relative to the second set of point cloud data in real time in response to the one or more user commands. The operations may also include receiving, through the user interface, an indication to update the stored point cloud data to reflect the one or more user commands, and storing the adjusted point cloud data of at least the first set of point cloud data in the electronic data store based on the one or more user commands.
In another embodiment of the computer system above, the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color. In one embodiment, the first set of point cloud data is generated from a plurality of light detection and ranging scans captured in proximity to each other. In one embodiment, the operations further comprise providing, via the user interface, an option to select a command and an associated scale for the command, wherein the scale represents a numerical quantity of at least one of: movement, yaw angle, pitch angle, or roll angle. In another embodiment, the proposed command includes movement along each of an x-axis, a y-axis, and a z-axis. In another embodiment, each suggested command is presented along with an indication of the associated keyboard accelerator key.
In another embodiment of the computer system above, the one or more user commands are received based on one or more keys input by a user, and the display is updated in response to the one or more user commands based in part on a predefined mapping of keys to commands. In another embodiment, the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data to better match at least the first subset of points with the second subset of points. In another embodiment, the operations further include automatically applying a suggested spatial manipulation within a three-dimensional rendering displayed in the user interface and prompting a user to agree to the suggested spatial manipulation. In yet another embodiment, the proposed spatial manipulation is determined based at least in part on determining that the first set of point cloud data is misaligned with the second set of point cloud data by less than a threshold value, wherein the threshold value represents at least one of a distance or an angle.
According to another embodiment, a non-transitory computer-readable medium stores computer-executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform particular operations. The operations may include obtaining a first set of point cloud data and a second set of point cloud data, wherein the first and second sets of point cloud data are each based at least in part on a plurality of light detection and ranging (LiDAR) scans of a geographic area, and generating a three-dimensional rendering of the first set of point cloud data and the second set of point cloud data, wherein, in the three-dimensional rendering, a first subset of points of the first set of point cloud data at least partially intersects a second subset of points of the second set of point cloud data, and the first subset of points is not fully aligned with the second subset of points. The operations may also include presenting, for display, a user interface, wherein the user interface includes a display of the three-dimensional rendering, and presenting a plurality of options for altering the positioning of the first set of point cloud data in the three-dimensional virtual space to better match at least the first subset of points with the second subset of points. The operations may also include receiving one or more user commands to edit the positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data, and updating, in real time, a display of at least the first set of point cloud data relative to the second set of point cloud data within the user interface in response to the one or more user commands. The operations may also include storing the adjusted point cloud data of at least the first set of point cloud data in an electronic data store based on the one or more user commands.
According to one embodiment, referring to the non-transitory computer readable medium above, the plurality of options includes commands and an associated scale for each command, wherein the scale represents a numerical quantity of at least one of: movement, yaw angle, pitch angle, or roll angle. In another embodiment, the one or more user commands are received based on one or more keys input by a user, and the display is updated in response to the one or more user commands based in part on a predefined mapping of keys to commands. In another embodiment, the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data that better matches at least the first subset of points with the second subset of points. In another embodiment, the operations include automatically applying suggested spatial manipulations within a three-dimensional rendering displayed in the user interface. In another embodiment, the proposed spatial manipulation is determined based at least in part on determining that the first set of point cloud data is misaligned with the second set of point cloud data by less than a threshold value, wherein the threshold value represents at least one of a distance or an angle.
In other embodiments, one or more systems may operate according to one or more of the methods and/or computer-readable media recited in the preceding paragraphs. In still other embodiments, one or more methods may operate in accordance with one or more of the systems and/or computer-readable media recited in the preceding paragraphs. In still further embodiments, a computer-readable medium or media, excluding transitory propagating signals, may cause one or more computing devices having one or more processors and non-transitory computer-readable memory to operate in accordance with one or more of the systems and/or methods recited in the preceding paragraphs.
Terminology
Conditional language, such as "can," "could," "might," or "may," unless specifically stated otherwise or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense, i.e., "including but not limited to". As used herein, the terms "connected," "coupled," or any variant thereof, refer to any direct or indirect connection or coupling between two or more elements; the coupling or connection between the elements may be physical, logical, or a combination thereof. Moreover, as used in this application, the words "herein," "above," "below," and words of similar import refer to this application as a whole and not to any particular portions of this application. Where the context permits, words using the singular or plural number may also include the plural or singular number, respectively. When taken in conjunction with a list of two or more items, the word "or" encompasses all of the following interpretations of the word: any one item in the list, all items in the list, and any combination of items in the list. Likewise, the term "and/or," when used in conjunction with a list of two or more items, encompasses all of the following interpretations of the word: any one item in the list, all items in the list, and any combination of items in the list.
In some embodiments, certain operations, actions, events, or functions of any of the algorithms described herein may be performed in a different order, may be added, merged, or eliminated altogether (e.g., not all are essential to performing the algorithms). In some embodiments, operations, actions, functions, or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores, or on other parallel architectures, rather than sequentially.
The systems and modules described herein may include software, firmware, hardware or any combination of software, firmware or hardware suitable for the purposes described. The software and other modules may reside on and execute on servers, workstations, personal computers, computerized tablets, PDAs, and other computing devices suitable for the purposes described herein. Software and other modules may be accessed through local computer memory, a network, a browser, or other means suitable for the purposes described herein. The data structures described herein may include computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combination thereof, suitable for the purposes described herein. User interface elements described herein may include elements from graphical user interfaces, interactive voice responses, command line interfaces, and other suitable interfaces.
Moreover, the processing of the various components of the illustrated system may be distributed across multiple machines, networks, and other computing resources. Two or more components of a system may be combined into fewer components. The various components of the illustrated system may be implemented in one or more virtual machines, rather than in dedicated computer hardware systems and/or computing devices. Likewise, the illustrated data stores may represent physical and/or logical data stores, including for example, storage area networks or other distributed storage systems. Furthermore, in some embodiments, the connections between the illustrated components represent possible paths of data flow, rather than actual connections between hardware. Although a few examples of possible connections are shown, in various implementations any subset of the components shown are capable of communicating with each other.
Embodiments are also described above in connection with flowchart illustrations and/or block diagrams for methods, apparatus (systems) and computer program products. Each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. Such instructions may be provided to a processor of a general purpose computer, special purpose computer (e.g., including a high performance database server, graphics subsystem, etc.), or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the actions specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the action specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computing device or other programmable data processing apparatus to cause a series of operational steps to be performed on the computing device or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computing device or other programmable apparatus provide steps for implementing the actions specified in the flowchart and/or block diagram block or blocks.
Any patents and applications and other references mentioned above, including any that may be listed in the accompanying filing documents, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the various referenced systems, functions and concepts to provide yet further implementations of the invention. These and other modifications can be made to the invention in light of the above detailed description. While certain examples of the invention, including the best mode contemplated, have been described above, the invention may be practiced in many ways, no matter how detailed the above text appears. The details of the system may vary considerably in specific implementations while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be interpreted to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification, unless such terms are explicitly defined in the above detailed description. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims.
Certain aspects of the invention are presented below in certain claim forms in order to reduce the number of claims, but the applicant contemplates other aspects of the invention in any number of claim forms. For example, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. § 112(f) (AIA), other aspects may likewise be embodied as means-plus-function claims, or in other forms, such as being embodied in a computer-readable medium. Any claim intended to be treated under 35 U.S.C. § 112(f) will begin with the words "means for", but use of the term "for" in any other context does not imply that treatment under 35 U.S.C. § 112(f) is invoked. Accordingly, the applicant reserves the right to add additional claims, in this application or in a later application, after filing the present application.

Claims (20)

1. A computer-implemented method, comprising:
obtaining point cloud data created based at least in part on a plurality of light detection and ranging scans of a geographic area;
generating a user interface for display, wherein the user interface comprises a two-dimensional map representation of at least a portion of the geographic area, wherein the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data, wherein the user interface comprises a plurality of graphical indicators superimposed within the two-dimensional map representation, and wherein each of the graphical indicators represents a different set of one or more light detection and ranging scans;
receiving, through user interaction with the user interface, a user selection of at least a first graphical indicator and a second graphical indicator of the plurality of graphical indicators, wherein the first graphical indicator represents a first set of point cloud data and the second graphical indicator represents a second set of point cloud data, wherein the first set of point cloud data partially intersects at least a portion of the second set of point cloud data in three-dimensional space;
generating a three-dimensional rendering of the first and second sets of point cloud data, wherein relative display positions of the first and second sets of point cloud data in the three-dimensional rendering visually convey partial intersections between a first subset of points of the first set of point cloud data and a second subset of points of the second set of point cloud data, wherein the first subset of points and the second subset of points are not fully aligned;
updating the user interface to include a display of the three-dimensional rendering, wherein the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color;
displaying, within the user interface, a plurality of suggested commands for altering a positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points;
receiving one or more user commands to edit a positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data;
updating, in real-time, a display of at least the first set of point cloud data relative to the second set of point cloud data in response to the one or more user commands;
receiving, through the user interface, an indication to update the stored point cloud data to reflect the one or more user commands;
storing the adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands; and
generating a high resolution map of the geographic area, wherein the high resolution map is generated based at least in part on the adjusted point cloud data and other point cloud data from the plurality of light detection and ranging scans.
2. The computer-implemented method of claim 1, further comprising:
storing the high-resolution map in an electronic data storage device; and
transmitting the high resolution map to a plurality of vehicles over a network for use in navigating one or more of the plurality of vehicles.
3. The computer-implemented method of claim 1, wherein the three-dimensional rendering is presented in the user interface while the two-dimensional map representation of the geographic area remains displayed in the user interface, wherein the three-dimensional rendering is presented in a different portion of the user interface than the two-dimensional map representation.
4. The computer-implemented method of claim 3, further comprising:
receiving, by user interaction with the two-dimensional map representation, a selection of a third graphical indicator of the plurality of graphical indicators; and
updating a display of the three-dimensional rendering within the user interface to include a rendering of a third set of point cloud data associated with the third graphical indicator.
5. A computer system, comprising:
a memory; and
a hardware processor in communication with the memory and configured with processor-executable instructions to perform operations comprising:
obtaining a first set of point cloud data and a second set of point cloud data, wherein the first and second sets of point cloud data are each based at least in part on a plurality of light detection and ranging scans of a geographic area;
generating a three-dimensional rendering of the first and second sets of point cloud data, wherein relative display positions of the first and second sets of point cloud data in the three-dimensional rendering visually convey partial intersections between a first subset of points of the first set of point cloud data and a second subset of points of the second set of point cloud data, wherein the first subset of points and the second subset of points are not fully aligned;
presenting a user interface for display, wherein the user interface includes a display of the three-dimensional rendering;
displaying, within the user interface, a plurality of suggested commands for altering a positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points;
receiving one or more user commands to edit the positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands comprise at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data;
updating, in real-time within the user interface, a display of at least the first set of point cloud data relative to the second set of point cloud data in response to the one or more user commands;
receiving, through the user interface, an indication to update the stored point cloud data to reflect the one or more user commands; and
storing, in an electronic data storage device, the adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands.
6. The computer system of claim 5, wherein the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color.
7. The computer system of claim 5, wherein the first set of point cloud data is generated by a plurality of light detection and ranging scans in proximity to each other.
8. The computer system of claim 5, wherein the operations further comprise providing, via the user interface, an option to select a command and an associated scale of the command, wherein the scale represents a numerical quantity of at least one of: movement, yaw angle, pitch angle or roll angle.
9. The computer system of claim 5, wherein the suggested commands comprise a movement along each of an x-axis, a y-axis, and a z-axis.
10. The computer system of claim 5, wherein each of the suggested commands is presented along with an indication of an associated keyboard accelerator.
11. The computer system of claim 5, wherein the one or more user commands are received based on one or more keys input by a user, and wherein the updating of the display in response to the one or more user commands is based in part on a predefined mapping of keys to commands.
12. The computer system of claim 5, wherein the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data to better match at least the first subset of points with the second subset of points.
13. The computer system of claim 12, wherein the operations further comprise:
automatically applying the suggested spatial manipulation within the three-dimensional rendering displayed in the user interface; and
prompting the user to agree to the suggested spatial manipulation.
14. The computer system of claim 13, wherein the suggested spatial manipulation is determined based at least in part on determining that the first set of point cloud data is misaligned with the second set of point cloud data by less than a threshold, wherein the threshold represents at least one of a distance or an angle.
15. A non-transitory computer-readable medium storing computer-executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform operations comprising:
obtaining a first set of point cloud data and a second set of point cloud data, wherein the first and second sets of point cloud data are each based at least in part on a plurality of light detection and ranging scans of a geographic area;
generating a three-dimensional rendering of the first and second sets of point cloud data, wherein a first subset of points of the first set of point cloud data at least partially intersect with a second subset of points of the second set of point cloud data in the three-dimensional rendering, wherein the first subset of points and the second subset of points are not fully aligned;
presenting a user interface for display, wherein the user interface includes a display of the three-dimensional rendering;
presenting a plurality of options for altering a positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points;
receiving one or more user commands to edit a positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands comprise at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data;
updating, in real-time within the user interface, a display of at least the first set of point cloud data relative to the second set of point cloud data in response to the one or more user commands; and
storing, in an electronic data storage device, the adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands.
16. The non-transitory computer-readable medium of claim 15, wherein the plurality of options includes commands and an associated scale for each command, wherein the scale represents a numerical quantity of at least one of: movement, yaw angle, pitch angle or roll angle.
17. The non-transitory computer-readable medium of claim 15, wherein the one or more user commands are received based on one or more keys input by a user, and wherein the updating of the display in response to the one or more user commands is based in part on a predefined mapping of keys to commands.
18. The non-transitory computer-readable medium of claim 15, wherein the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data to better match at least the first subset of points with the second subset of points.
19. The non-transitory computer-readable medium of claim 18, wherein the operations further comprise:
automatically applying the suggested spatial manipulation within the three-dimensional rendering displayed in the user interface.
20. The non-transitory computer-readable medium of claim 18, wherein the suggested spatial manipulation is determined based at least in part on determining that the first set of point cloud data is misaligned with the second set of point cloud data by less than a threshold, wherein the threshold represents at least one of a distance or an angle.
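Claim 1 recites generating the two-dimensional map representation as a projection of at least a subset of the point cloud data. As a minimal sketch of what such a projection could look like, assuming a simple top-down (x-y) rasterization in Python with NumPy, the function below drops the z coordinate and marks occupied grid cells; the function name, the resolution parameter, and the occupancy-image output are illustrative assumptions rather than details taken from the claims.

```python
import numpy as np

def top_down_projection(points: np.ndarray, resolution: float = 0.5) -> np.ndarray:
    """Rasterize an (N, 3) point cloud into a 2D occupancy image by dropping z.

    `resolution` is map units per pixel; the occupancy-style output is only
    one plausible form of the claimed two-dimensional map representation.
    """
    xy = points[:, :2]
    cells = np.floor((xy - xy.min(axis=0)) / resolution).astype(int)
    width, height = cells.max(axis=0) + 1
    image = np.zeros((height, width), dtype=np.uint8)
    image[cells[:, 1], cells[:, 0]] = 255  # mark cells containing at least one point
    return image

# Usage: project a synthetic cloud and inspect the resulting image size.
cloud = np.random.rand(10_000, 3) * 100.0
print(top_down_projection(cloud).shape)
```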
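Claims 8-11 recite suggested commands with an associated scale (movement, yaw, pitch, or roll) and a predefined mapping of keys to commands. A minimal sketch of such a mapping, assuming hypothetical key bindings and a rigid translation or yaw applied to an (N, 3) NumPy array, might look like the following; the particular keys, the axes chosen, and rotating about the cloud's own centroid are assumptions for illustration only.

```python
import numpy as np

# Hypothetical key-to-command mapping (the specific keys and step directions
# below are assumptions, not taken from the claims).
KEY_COMMANDS = {
    "a": ("translate", np.array([-1.0, 0.0, 0.0])),  # move along -x
    "d": ("translate", np.array([+1.0, 0.0, 0.0])),  # move along +x
    "w": ("translate", np.array([0.0, +1.0, 0.0])),  # move along +y
    "s": ("translate", np.array([0.0, -1.0, 0.0])),  # move along -y
    "q": ("rotate_yaw", +1.0),                        # yaw left, in degrees
    "e": ("rotate_yaw", -1.0),                        # yaw right, in degrees
}

def yaw_matrix(degrees: float) -> np.ndarray:
    """3x3 rotation about the z-axis (yaw)."""
    r = np.radians(degrees)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def apply_key(points: np.ndarray, key: str, scale: float) -> np.ndarray:
    """Apply one keyed command, scaled by `scale`, to an (N, 3) point cloud."""
    command, value = KEY_COMMANDS[key]
    if command == "translate":
        return points + scale * value
    if command == "rotate_yaw":
        centroid = points.mean(axis=0)  # rotating about the centroid is an assumption
        return (points - centroid) @ yaw_matrix(scale * value).T + centroid
    return points

# Example: nudge the first set 0.5 units along +x, then yaw it by 2 degrees,
# as a user might while visually matching it against the second set.
first_set = np.random.rand(1000, 3)
first_set = apply_key(first_set, "d", scale=0.5)
first_set = apply_key(first_set, "q", scale=2.0)
```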
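Claims 12-14 and 18-20 recite automatically determining a suggested spatial manipulation when the first and second sets are misaligned by less than a threshold distance or angle. One plausible heuristic, assumed here rather than taken from the claims, is a single nearest-neighbour (ICP-style) step: compute the mean nearest-neighbour distance between the two sets and, only when it falls below the threshold, propose a small corrective translation that the user can then accept or reject.

```python
import numpy as np
from scipy.spatial import cKDTree  # nearest-neighbour queries

def suggest_translation(first_set: np.ndarray,
                        second_set: np.ndarray,
                        distance_threshold: float = 0.5):
    """Return a suggested (x, y, z) translation for `first_set`, or None.

    A suggestion is only made when the mean nearest-neighbour distance is
    below `distance_threshold`, mirroring the claimed "misaligned by less
    than a threshold" condition; the ICP-style heuristic is an assumption.
    """
    tree = cKDTree(second_set)
    distances, indices = tree.query(first_set)
    if distances.mean() >= distance_threshold:
        return None  # too far apart for an automatic suggestion
    # Propose moving toward the mean offset of the matched points.
    return (second_set[indices] - first_set).mean(axis=0)

# Usage: apply the suggestion in the 3D rendering and prompt the user to approve it.
suggestion = suggest_translation(np.random.rand(500, 3), np.random.rand(500, 3))
if suggestion is not None:
    print("Suggested move (x, y, z):", suggestion)
```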
CN201880100676.4A 2018-12-28 2018-12-28 Interactive three-dimensional point cloud matching Active CN113748314B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2018/067892 WO2020139373A1 (en) 2018-12-28 2018-12-28 Interactive 3d point cloud matching

Publications (2)

Publication Number Publication Date
CN113748314A (en) 2021-12-03
CN113748314B CN113748314B (en) 2024-03-29

Family

ID=71129659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880100676.4A Active CN113748314B (en) 2018-12-28 2018-12-28 Interactive three-dimensional point cloud matching

Country Status (2)

Country Link
CN (1) CN113748314B (en)
WO (1) WO2020139373A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112021998B (en) * 2020-07-20 2023-08-29 科沃斯机器人股份有限公司 Data processing method, measurement system, autonomous mobile device and cleaning robot
CN111929694B (en) * 2020-10-12 2021-01-26 炬星科技(深圳)有限公司 Point cloud matching method, point cloud matching equipment and storage medium
CN112578406B (en) * 2021-02-25 2021-06-29 北京主线科技有限公司 Vehicle environment information sensing method and device
CN113160405A (en) * 2021-04-26 2021-07-23 深圳市慧鲤科技有限公司 Point cloud map generation method and device, computer equipment and storage medium
US11887271B2 (en) * 2021-08-18 2024-01-30 Hong Kong Applied Science and Technology Research Institute Company Limited Method and system for global registration between 3D scans
CN114322987B (en) * 2021-12-27 2024-02-23 北京三快在线科技有限公司 Method and device for constructing high-precision map
CN116188660B (en) * 2023-04-24 2023-07-11 深圳优立全息科技有限公司 Point cloud data processing method and related device based on stream rendering

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9024970B2 (en) * 2011-12-30 2015-05-05 Here Global B.V. Path side image on map overlay
WO2014080330A2 (en) * 2012-11-22 2014-05-30 Geosim Systems Ltd. Point-cloud fusion
US20160379366A1 (en) * 2015-06-25 2016-12-29 Microsoft Technology Licensing, Llc Aligning 3d point clouds using loop closures
CN108286976A (en) * 2017-01-09 2018-07-17 北京四维图新科技股份有限公司 The fusion method and device and hybrid navigation system of a kind of point cloud data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100204964A1 (en) * 2009-02-09 2010-08-12 Utah State University Lidar-assisted multi-image matching for 3-d model and sensor pose refinement
CN103106680A (en) * 2013-02-16 2013-05-15 赞奇科技发展有限公司 Implementation method for three-dimensional figure render based on cloud computing framework and cloud service system
US20150123995A1 (en) * 2013-11-07 2015-05-07 Here Global B.V. Method and apparatus for processing and aligning data point clouds
US20160196687A1 (en) * 2015-01-07 2016-07-07 Geopogo, Inc. Three-dimensional geospatial visualization
CN108765487A (en) * 2018-06-04 2018-11-06 百度在线网络技术(北京)有限公司 Rebuild method, apparatus, equipment and the computer readable storage medium of three-dimensional scenic
CN109064506A (en) * 2018-07-04 2018-12-21 百度在线网络技术(北京)有限公司 Accurately drawing generating method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陶志鹏; 陈志国; 王英; 吴冰冰; 程思琪: "Research on Real-Time Visualization of Massive 3D Terrain Data" (海量三维地形数据的实时可视化研究), Technology Innovation and Application (科技创新与应用), no. 30, 28 October 2013 (2013-10-28) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI810809B (en) * 2022-02-10 2023-08-01 勤崴國際科技股份有限公司 Geodetic Coordinate Processing Method for Street Signs
CN114820953A (en) * 2022-06-29 2022-07-29 深圳市镭神智能系统有限公司 Data processing method, device, equipment and storage medium
CN115097976A (en) * 2022-07-13 2022-09-23 北京有竹居网络技术有限公司 Method, apparatus, device and storage medium for image processing
CN115097977A (en) * 2022-07-13 2022-09-23 北京有竹居网络技术有限公司 Method, apparatus, device and storage medium for point cloud processing
CN115097976B (en) * 2022-07-13 2024-03-29 北京有竹居网络技术有限公司 Method, apparatus, device and storage medium for image processing
CN117841988A (en) * 2024-03-04 2024-04-09 厦门中科星晨科技有限公司 Parking control method, device, medium and equipment for unmanned vehicle
CN117841988B (en) * 2024-03-04 2024-05-28 厦门中科星晨科技有限公司 Parking control method, device, medium and equipment for unmanned vehicle

Also Published As

Publication number Publication date
WO2020139373A1 (en) 2020-07-02
CN113748314B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
US10976421B2 (en) Interface for improved high definition map generation
US12117307B2 (en) Interactive 3D point cloud matching
CN113748314B (en) Interactive three-dimensional point cloud matching
CN112204343B (en) Visualization of high definition map data
EP3673407B1 (en) Automatic occlusion detection in road network data
US11131554B2 (en) Systems and methods for vehicle telemetry
US20210004012A1 (en) Goal-Directed Occupancy Prediction for Autonomous Driving
US20200208998A1 (en) Systems and methods for safe route planning for a vehicle
CN104574953B (en) Traffic signals prediction
WO2021003452A1 (en) Determination of lane connectivity at traffic intersections for high definition maps
US11782129B2 (en) Automatic detection of overhead obstructions
US11983010B2 (en) Systems and methods for automated testing of autonomous vehicles
CN113874803A (en) System and method for updating vehicle operation based on remote intervention
CN110832417A (en) Generating routes for autonomous vehicles using high-definition maps
AU2018266108A1 (en) Destination changes in autonomous vehicles
US10915112B2 (en) Autonomous vehicle system for blending sensor data
CN113748316A (en) System and method for vehicle telemetry
CN113748448B (en) Vehicle-based virtual stop-line and yield-line detection
CN113728310A (en) Architecture for distributed system simulation
US11820397B2 (en) Localization with diverse dataset for autonomous vehicles
CN116724214A (en) Method and system for generating a lane-level map of a region of interest for navigation of an autonomous vehicle
US11989805B2 (en) Dynamic geometry using virtual spline for making maps
WO2020139377A1 (en) Interface for improved high definition map generation
CN114072784A (en) System and method for loading object geometric data on a vehicle
US20240241893A1 (en) Systems and methods for providing electronic maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant