CN107144285B - Pose information determination method and device and movable equipment - Google Patents
- Publication number
- CN107144285B CN107144285B CN201710317975.0A CN201710317975A CN107144285B CN 107144285 B CN107144285 B CN 107144285B CN 201710317975 A CN201710317975 A CN 201710317975A CN 107144285 B CN107144285 B CN 107144285B
- Authority
- CN
- China
- Prior art keywords
- semantic
- local
- map
- semantic map
- pose information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1656—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Navigation (AREA)
Abstract
A pose information determination method, a pose information determination apparatus, and a movable device are disclosed. The method is applied to a movable device and comprises: receiving sample data, acquired by an environment sensor, of the current moving environment in which the movable device is moving, the sample data including position data and image data; determining local pose information of the movable device based at least on the position data; constructing a local semantic map of the current moving environment according to the local pose information and the image data, wherein the local semantic map comprises semantic entities in the current moving environment and their attribute information; acquiring a global semantic map of the current moving environment; matching the local semantic map against the global semantic map; and, in response to matching the local semantic map in the global semantic map, updating the local pose information according to the result of the matching. The pose information of the movable device can thus be obtained efficiently and accurately.
Description
Technical Field
The present application relates to the field of movable devices, and more particularly, to a pose information determination method and apparatus and a movable device.
Background
A movable device (e.g., a robot) needs to determine its own position and orientation information (i.e., pose information) in real time while performing a task, in order to carry out destination planning, real-time obstacle sensing, and so on.
Existing high-precision positioning methods fall mainly into two categories. The first pairs a high-precision Global Positioning System (GPS) with real-time kinematic (RTK) corrections and a high-end mobile receiver; under ideal conditions free of occlusion and similar interference, it can achieve centimeter-level positioning accuracy. The second is based on a traditional high-precision digital map and high-precision sensors (usually a relatively expensive lidar and a high-end integrated navigation system); the sensors collect real-time data, which is matched against the high-precision map to obtain a high-precision position.
However, the first method requires additional ground stations, and the second requires dedicated map-collection equipment, so neither is very practical. That is, conventional methods place most of the positioning functionality on the server (cloud) side, which significantly increases the overhead of the cloud infrastructure.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. Embodiments of the present application provide a pose information determination method, apparatus, movable device, computer program product, and computer-readable storage medium that can efficiently and accurately obtain the pose information of a movable device.
According to an aspect of the present application, there is provided a pose information determination method applied to a movable device, the method including: receiving sample data, acquired by an environment sensor, of the current moving environment in which the movable device is moving, the sample data including position data and image data; determining local pose information of the movable device based at least on the position data; constructing a local semantic map of the current moving environment according to the local pose information and the image data, wherein the local semantic map comprises semantic entities in the current moving environment and their attribute information, the semantic entities being entities that can affect movement and the attribute information indicating physical characteristics of the semantic entities; acquiring a global semantic map of the current moving environment; matching the local semantic map against the global semantic map; and, in response to matching the local semantic map in the global semantic map, updating the local pose information according to the result of the matching.
According to another aspect of the present application, there is provided a pose information determination apparatus applied to a movable device, the apparatus including: a sample data receiving unit for receiving sample data, acquired by an environment sensor, of the current moving environment in which the movable device is moving, the sample data including position data and image data; a pose information determination unit for determining local pose information of the movable device at least from the position data; a semantic map construction unit for constructing a local semantic map of the current moving environment according to the local pose information and the image data, where the local semantic map comprises semantic entities in the current moving environment and their attribute information, the semantic entities being entities that can affect movement and the attribute information indicating physical characteristics of the semantic entities; a semantic map acquisition unit for acquiring a global semantic map of the current moving environment; a semantic map matching unit for matching the local semantic map against the global semantic map; and a pose information updating unit for updating the local pose information according to the matching result in response to matching the local semantic map in the global semantic map.
According to another aspect of the present application, there is provided a movable device comprising: a processor; a memory; and computer program instructions stored in the memory which, when executed by the processor, cause the processor to perform the pose information determination method described above.
According to another aspect of the present application, there is provided a pose information determination system including: the above-described movable device; and a server device for storing the global semantic map of the current moving environment.
According to another aspect of the present application, there is provided a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the pose information determination method described above.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to execute the pose information determination method described above.
Compared with the prior art, with the pose information determination method, pose information determination apparatus, movable device, computer program product, and computer-readable storage medium according to the embodiments of the present application, sample data of the current moving environment in which the movable device is moving, acquired by an environment sensor, may be received, the sample data including position data and image data; local pose information of the movable device is determined based at least on the position data; a local semantic map of the current moving environment is constructed according to the local pose information and the image data, the local semantic map comprising semantic entities in the current moving environment and their attribute information, the semantic entities being entities that can affect movement and the attribute information indicating physical characteristics of the semantic entities; a global semantic map of the current moving environment is acquired; the local semantic map is matched against the global semantic map; and, in response to a match, the local pose information is updated according to the result of the matching. The pose information of the movable device can thus be obtained efficiently and accurately.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates a block diagram of a pose information determination system according to an embodiment of the present application.
Fig. 2 illustrates a flowchart of a pose information determination method according to an embodiment of the present application.
Fig. 3 illustrates a flowchart of local pose information determination steps according to an embodiment of the present application.
FIG. 4 illustrates a flow chart of local semantic map construction steps according to an embodiment of the present application.
Fig. 5 illustrates a flow chart of semantic map matching steps according to an embodiment of the application.
FIG. 6 illustrates a flow diagram of a category attribute matching based sub-step according to an embodiment of the present application.
Fig. 7 illustrates a flowchart of a local pose information updating step according to an embodiment of the present application.
Fig. 8 illustrates a flow chart of additional steps of a pose information determination method according to an embodiment of the present application.
Fig. 9 illustrates a schematic diagram of a pose information determination scene according to a first specific example of an embodiment of the present application.
Fig. 10 illustrates a schematic diagram of a pose information determination scene according to a second specific example of the embodiment of the present application.
Fig. 11 illustrates a block diagram of a pose information determination apparatus according to an embodiment of the present application.
FIG. 12 illustrates a block diagram of a movable device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, the conventional high-precision positioning method has the following problems:
1) The supporting equipment is complex and expensive: differential positioning requires comprehensive large-scale coverage of base stations and integrated cloud communication for corrections; traditional high-precision map positioning likewise requires costly equipment such as a high-end integrated navigation system and an expensive lidar.
2) The supporting system is complex and map maintenance is costly: large-scale differential positioning requires the cloud to run complex solving algorithms that fuse correction data from a large number of base stations to reduce noise, while providing real-time access for a large number of mobile terminals; a traditional high-precision map is usually produced by collecting data many times with expensive professional sensors, making production very costly, and it is difficult for a large number of collection terminals to collect and update in real time; meanwhile, the large volume of collected data must be processed in a complex, time-consuming offline pipeline (possibly requiring manual work) to obtain the high-precision map.
In view of the problems in the prior art, the basic idea of the present application is to propose a pose information determination method, a pose information determination apparatus, a movable device, a computer program product, and a computer-readable storage medium that relieve the infrastructure overhead of the server device by increasing the intelligence of the movable device. Specifically, the movable device (e.g., a vehicle) first obtains a local semantic map and a positioning result using common hardware, and these are used for the decision control of the vehicle's automatic driving.
In other words, with this inventive concept, the movable device can perform pose calculation using an ordinary GPS plus structured semantic tags, and a dedicated differential GPS base station is not necessary. The system makes full use of the existing traffic infrastructure of the moving environment (such as roads), so it can be applied widely at low cost. Traditional positioning methods rely mainly on cloud facilities and cloud computation, and are therefore limited in real-time performance and reliability. In contrast, in the embodiments of the present application, a more intelligent movable device greatly strengthens its own positioning capability, so the system can make real-time decisions more intelligently and quickly according to the scene, and can obtain a global positioning result when sufficient structured semantic information is available. The system as a whole is therefore faster, more reliable, and more intelligent.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 1 illustrates a block diagram of a pose information determination system according to an embodiment of the present application.
As shown in fig. 1, the pose information determination system according to the embodiment of the present application includes a movable device 100 and a server device 200.
The movable device 100 may move in a known or unknown moving environment. When it moves in an unknown environment without an a priori semantic map, it can perform local pose calculation from the collected environment sample data, build a new local semantic map of the unknown environment, and perform movement control according to the local positioning result. When it moves in a known environment with an a priori semantic map, it can perform semantic matching between the a priori semantic map and the local semantic map, thereby improving its local positioning accuracy and generating a more accurate and efficient movement control strategy. In addition, the movable device 100 may upload the local semantic map to the server device 200.
The server device 200 may receive a download request from the movable device 100, check whether it holds an a priori semantic map of the current moving environment in which the movable device 100 is moving, and, if so, provide that map to the movable device 100. In addition, the server device 200 may receive a local semantic map uploaded by the movable device 100 and fuse it with the a priori global semantic map, implementing a dynamic learning process for the semantic map.
For example, the movable device 100 may be any type of electronic device capable of moving in a moving environment, and the moving environment may be an indoor and/or outdoor environment. The movable device 100 may be a mobile robot for various purposes, for example a vehicle such as an automobile, an aircraft, a spacecraft, or a watercraft. Of course, the present application is not limited thereto: it may also be a floor-sweeping robot, a window-cleaning robot, an air-purifying robot, a security robot, a home-appliance management robot, a reminder robot, a patrol robot, and so on.
The server device 200 may be a cloud server, which has strong computing capability and may include a plurality of processing engines for fusing semantic maps. Of course, the present application is not limited thereto. The server device 200 may also be located, wholly or partially, on the local side of the movable device 100, forming a distributed server architecture.
It is to be noted that the pose information determination system shown in fig. 1 is presented only for ease of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited thereto. For example, there may be one or more movable devices 100 and/or server devices 200.
In the following description, for convenience, a vehicle traveling on a road and a cloud server are taken as examples of the movable device 100 and the server device 200, respectively.
Exemplary Method of the Movable Device
First, a pose information determination method applied to the movable apparatus 100 according to an embodiment of the present application will be described.
Fig. 2 illustrates a flowchart of a pose information determination method according to an embodiment of the present application.
As shown in fig. 2, the pose information determination method according to the embodiment of the present application may be applied to a movable apparatus 100, and includes:
in step S110, sample data of a current movement environment in which the movable device is moving, which is acquired by an environment sensor, is received, the sample data including position data and image data.
A movable device 100 (e.g., a vehicle) may move in a moving environment (e.g., on a road), during which it may capture sample data of the moving environment with the environment sensors it is equipped with.
For example, the environment sensor is used to acquire sample data of the current moving environment in which the movable device is moving, and may be any of various types of sensors. It may include, for example: an image sensor (a camera or a camera array) for capturing image data; a laser sensor for capturing scan data; a GPS device for acquiring real-time position coordinates of the movable device 100; an RTK device for real-time kinematic positioning based on carrier-phase observations; an Inertial Measurement Unit (IMU) for positioning based on an object's three-axis attitude angles (or angular rates) and acceleration; and so on. Of course, the environment sensor may be any other device capable of generating sample data that describes the moving environment. It should be noted that, in the embodiments of the present application, the environment sensor need not be a high-end sensor; a low-cost acquisition device suffices.
For example, at minimum the sample data acquired by the environment sensor may include position data of the vehicle and image data of the environment around the vehicle. The position data may be absolute position coordinates (e.g., latitude and longitude obtained directly by a GPS device) or relative position coordinates (e.g., accumulated motion parameters such as distance, derived from the wheels' revolution count, speed, and so on) of the movable device 100. The image data may be a camera vision image or a laser-scan image. Further, the sample data may also include attitude data, which may be an absolute orientation angle obtained by differential GPS, or a relative orientation angle (e.g., an accumulated motion parameter such as direction, derived from the turning angle of the vehicle's wheels). The attitude data may be combined with the position data to form pose data.
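As a rough sketch, one frame of the sample data described above could be represented as a simple container; the field names here are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class SampleData:
    """One frame of environment-sensor output (hypothetical layout)."""
    position: tuple                   # absolute (lat, lon) or relative (x, y)
    image: Any                        # camera image or laser-scan frame
    attitude: Optional[float] = None  # orientation angle in radians, if sensed

    def pose(self):
        """Combine the position data and attitude data into pose data."""
        return (*self.position, self.attitude)
```

A frame with both position and attitude then yields a full planar pose tuple via `pose()`.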
In step S120, local pose information of the movable device is determined from at least the position data.
In one example, local pose information of the movable device in the current moving environment may be determined based solely on the position data (and attitude data).
In the simplest case, the local pose information of the movable device in the current moving environment may be determined from the acquired position data alone. For example, the vehicle's GPS latitude and longitude may be used directly as its local position coordinates, and the local orientation angle may be calculated from the difference between consecutive frames of GPS readings; the two are then combined into local pose information. Since this pose information is obtained solely from GPS, it is absolute local pose information.
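The orientation-from-consecutive-GPS-frames computation can be sketched as follows, using a local flat-earth approximation that is valid over the short distance between frames (the patent does not fix a specific formula):

```python
import math

def heading_from_fixes(prev, curr):
    """Approximate heading (radians, east = 0, counterclockwise) from two
    consecutive GPS fixes given as (lat, lon) in degrees, using a local
    equirectangular approximation."""
    lat0 = math.radians((prev[0] + curr[0]) / 2.0)   # reference latitude
    dx = math.radians(curr[1] - prev[1]) * math.cos(lat0)  # eastward component
    dy = math.radians(curr[0] - prev[0])                   # northward component
    return math.atan2(dy, dx)
```

For instance, a vehicle moving due north between frames gets a heading of pi/2, and one moving due east gets 0.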
Alternatively, the local pose information of the movable device in the current moving environment may be determined from the acquired position data and attitude data. For example, the local pose information of the vehicle can be determined from its initial pose at start-up together with parameters such as the accumulated movement distance and direction obtained from the IMU. Because this pose information is obtained solely from motion increments, it is relative local pose information.
Because GPS accuracy can be degraded by factors such as occlusion and multipath effects, and accumulated motion parameters suffer from error accumulation, the two results can further be fused to obtain more accurate and reliable local pose information.
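A minimal illustration of fusing the two position estimates, assuming each comes with a scalar per-axis variance; this inverse-variance weighting is a sketch of the idea, while a production system would typically use a Kalman filter over the joint state:

```python
def fuse(abs_pos, abs_var, rel_pos, rel_var):
    """Inverse-variance weighted fusion of two position estimates:
    an absolute fix (e.g., GPS) and a dead-reckoned estimate (e.g., IMU).
    Each position is a sequence of coordinates; the variances are scalars."""
    w_abs = 1.0 / abs_var
    w_rel = 1.0 / rel_var
    return [(w_abs * a + w_rel * r) / (w_abs + w_rel)
            for a, r in zip(abs_pos, rel_pos)]
```

Equal variances average the two estimates; a noisier dead-reckoned estimate is weighted down accordingly.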
In another example, image data may be further introduced in determining local pose information of the movable device. Next, description will be made with reference to fig. 3.
Fig. 3 illustrates a flowchart of local pose information determination steps according to an embodiment of the present application.
As shown in fig. 3, step S120 may include:
in sub-step S121, absolute pose information of the movable device is determined at least from the position data.
In sub-step S122, relative pose information of the movable device is determined at least from the image data.
In sub-step S123, the local pose information is generated from the absolute pose information and the relative pose information.
In addition to positioning means in the traditional sense such as GPS and the IMU, real-time visual images and/or scan images acquired by the vehicle's cameras and/or laser sensors can, after algorithmic processing, serve as a local odometer. The local odometer captures, for example, the vehicle's local motion between two image frames, from which its local relative displacement, steering, and so on can be calculated. In other words, by fusing the IMU, the GPS, and the camera-based local odometer, a more robust and more accurate local positioning result can be obtained. Such fusion may, for example, correct local pose information that is relatively noisy.
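The frame-to-frame motion reported by such a local odometer can be composed onto the current pose as follows; this planar-pose (x, y, heading) sketch is an illustration of how an odometry increment advances the local pose between absolute fixes:

```python
import math

def compose_se2(pose, delta):
    """Compose a local odometry increment delta = (dx, dy, dtheta),
    expressed in the vehicle frame, onto the current global pose
    (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            (th + dth) % (2 * math.pi))
```

For example, a vehicle facing north (heading pi/2) that moves one meter forward ends up one meter north of where it started.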
In step S130, a local semantic map of the current moving environment is constructed according to the local pose information of the movable device and the image data; the local semantic map includes semantic entities in the current moving environment and their attribute information, the semantic entities being entities that can affect movement and the attribute information indicating physical characteristics of the semantic entities.
After determining the local pose information of the movable device, a local semantic map may be further constructed from it and image data in the sample data.
FIG. 4 illustrates a flow chart of local semantic map construction steps according to an embodiment of the present application.
As shown in fig. 4, step S130 may include:
in sub-step S131, semantic entities in the current mobile environment are detected from the image data.
For example, it may be detected from the acquired image data which semantic entities are present in the current moving environment. The semantic entities may be entities that can affect the movement of the movable device 100 itself. Of course, the present application is not limited thereto: more broadly, a semantic entity may be one that can affect the movement of other objects of interest besides the movable device 100. For example, even though the movable device 100 is a vehicle, when constructing the map it may also take into account other traffic participants that may be present on the road (e.g., pedestrians, bicycles) as potential future users of the map.
For example, where the movable device 100 is a vehicle, the semantic entities may be drivable roads, curbs, traffic signs (e.g., physical signs such as signal lights, cameras, and guideboards, and road-surface markings such as lane lines, stop lines, and crosswalks), isolation belts, green belts, and the like.
Generally, semantic entities follow a specification and have a specific meaning. A semantic entity may have a particular geometric shape (e.g., circle, square, triangle, bar) or carry a particular feature identifier (e.g., a two-dimensional code). It may also be painted with a stop marking, a slow-down marking, a falling-rocks-ahead warning, and so on, which embodies its meaning.
Specifically, sub-step S131 may include: performing detection, tracking, and recognition on the image data; and determining the semantic entities in the current moving environment according to the detection, tracking, and recognition results.
For example, each vehicle can perform the detection, tracking, and recognition of semantic entities through local computation. Specifically, feature representations may be extracted from the image data by a machine learning model trained in advance on a large number of samples, and semantic entity detection is performed on those feature representations. The machine learning model may be implemented with various architectures such as convolutional neural networks or deep belief networks.
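The detection sub-step can be sketched as a thin wrapper around any pre-trained detector. The `Detection` record and the `model` callable below are illustrative stand-ins, since the patent does not fix a specific model or output format:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Detection:
    label: str    # e.g. "lane_line", "traffic_sign" (illustrative categories)
    box: tuple    # (x, y, w, h) in image pixels
    score: float  # detector confidence in [0, 1]

def detect_semantic_entities(image: Any, model: Callable,
                             threshold: float = 0.5) -> List[Detection]:
    """Run a pre-trained detector over one frame and keep confident hits.
    `model` stands in for any CNN/DBN-based detector."""
    return [d for d in model(image) if d.score >= threshold]
```

Plugging in a dummy model shows the filtering: low-confidence detections are dropped before map construction.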
In sub-step S132, attribute information of the semantic entity is determined according to the local pose information and the image data.
Next, the local pose information of the movable device and the semantic entities detected from the image data may be combined to determine what attribute information each semantic entity has.
For example, the attribute information may indicate physical characteristics of the semantic entity, such as attributes through which the semantic entity can affect the movement of the movable device 100 itself. Similarly, more broadly, it may also indicate attributes through which the semantic entity can affect other objects of interest besides the movable device 100.
For example, at a basic level, the attribute information may be spatial attribute information such as the position, shape, size, and orientation of each semantic entity. Further, the attribute information may be category attribute information of the respective semantic entities (such as whether each semantic entity is a drivable road, a curb, a lane or lane line, a traffic sign, a road surface sign, a traffic light, a stop line, a crosswalk, a roadside tree or pillar, or the like).
For example, the substep S132 may include: determining a relative positional relationship between the semantic entity and the movable device from the image data; and determining the spatial attribute information of the semantic entity according to the local pose information and the relative positional relationship.
For example, after each vehicle detects, tracks, and recognizes a semantic entity using local computation, the spatial attributes of the semantic mark are calculated from the tracking sequence together with the vehicle's fused GPS/RTK position and the semantically localized position and orientation. For example, the spatial attributes may include various attributes related to spatial characteristics such as the size, shape, orientation, height, and footprint of the semantic mark.
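As a minimal sketch of anchoring a spatial attribute, assuming a 2D pose (x, y, yaw) and an entity offset measured in the vehicle frame (the names and frame conventions here are assumptions, not the patent's specification), a rigid transform yields the entity's map-frame position:

```python
import math

def entity_map_position(pose, rel):
    """Transform an entity position from the vehicle frame to the map frame.

    pose: (x, y, yaw) of the vehicle in the map frame (yaw in radians,
          measured counterclockwise from the map x-axis).
    rel:  (forward, left) offset of the entity in the vehicle frame,
          e.g. estimated from the image via the camera model.
    """
    x, y, yaw = pose
    fwd, left = rel
    # Rotate the vehicle-frame offset by the vehicle heading, then translate.
    mx = x + fwd * math.cos(yaw) - left * math.sin(yaw)
    my = y + fwd * math.sin(yaw) + left * math.cos(yaw)
    return (mx, my)
```

The same transform, applied to each tracked observation and averaged over the tracking sequence, gives the entity's position attribute in absolute coordinates.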
In addition to the spatial attribute information, for example, the category of each semantic entity may be further determined based on image data (e.g., the result of detection, tracking, and recognition of an image).
In sub-step S133, the local semantic map is constructed from the semantic entities and their attribute information.
Once the semantic entities and their attribute information included in the current mobile environment are determined, this information can be integrated to construct a local semantic map based on the current frame of sample data. That is, the semantic mark result of each frame is reconstructed and annotated with attributes such as position and size, yielding a semantic mark map with absolute attributes.
In step S140, a global semantic map of the current mobile environment is obtained.
Before, after, or simultaneously with step S110, a global semantic map generated in advance may be acquired to determine which semantic entities are present in the current mobile environment according to a priori information.
For example, the global semantic map may be stored in the movable device 100; e.g., it may have been previously downloaded from the server device 200, or it may have been stored locally after the movable device 100 previously constructed it. Alternatively, the global semantic map may also be downloaded from the server device 200 in real time.
Generally speaking, for a high-precision global semantic map, a crowdsourced online learning manner can be adopted in the system to generate the high-precision semantic map. That is, each movable device 100 in the system may upload a locally generated local semantic map to the server device 200 (e.g., the cloud). Once the server device 200 obtains the local semantic map from the movable device 100 (vehicle), it can analyze which road the vehicle is currently on, which semantic entities are on the road (drivable roads, curbs, lanes and lane lines, traffic signs, road surface signs, traffic lights, stop lines, pedestrian crossings, roadside trees and pillars, etc.), and their corresponding attributes (position, size, orientation, category, etc.), and continuously fuse these semantic entities and their attributes to make the map more complete and to improve its accuracy.
That is, when a road does not yet have a high-precision map, a crowdsourced vehicle equipped with the relevant devices and algorithms passes through the road and generates a partial high-precision map of it (owing to the camera's viewing angle and the confidence of the semantic marks, the mobile vehicle may upload only a partial map). As the number of passes over the road increases, the completeness of the map becomes better and better on the one hand, and on the other hand the precision of the map (the precision of the semantic mark attributes, such as position coordinates, category, size, orientation, etc.) also increases.
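The fusion rule on the server side is not spelled out in the text; a hedged sketch of one plausible scheme is an incremental mean over repeated observations of the same entity, so that position precision improves as more crowdsourced vehicles pass (class and field names below are illustrative):

```python
class CrowdsourcedEntity:
    """Fuses repeated crowdsourced observations of one semantic entity.

    Each uploaded local map contributes one position estimate; keeping an
    observation count and a running mean reduces per-vehicle noise, which
    is how the map's precision can grow with the number of road passes.
    """
    def __init__(self, category):
        self.category = category
        self.count = 0
        self.position = (0.0, 0.0)

    def fuse(self, observed_position):
        self.count += 1
        w = 1.0 / self.count  # incremental-mean weight
        self.position = (
            self.position[0] + w * (observed_position[0] - self.position[0]),
            self.position[1] + w * (observed_position[1] - self.position[1]),
        )
```

A real system would also weight observations by their reported confidence and prune entities that stop being re-observed (e.g., a removed sign).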
In one example, the step S140 may include: downloading the global semantic map from a server device according to the location data.
For example, the mobile device 100 (vehicle) may communicate with the server device 200 (cloud server) and attempt to obtain an a priori global semantic map of the current mobile environment.
For example, the a priori global semantic map may be requested from the server device 200 according to previously determined coarse GPS coordinates of the vehicle. Of course, the prior semantic map may also be downloaded based on the local pose information corrected by the image data. Alternatively, the map may be requested based on the movement trajectory of the vehicle.
If the global semantic map is not obtained, this indicates that no crowdsourced vehicle has yet traversed the current road. Then, a movement control policy for controlling the movable device to enable it to achieve a predetermined movement goal in the current moving environment may be generated directly from a non-semantic digital map (a digital map in the conventional sense) and the coarse GPS coordinates (or the corrected local pose information, etc.) of the current moving environment. For example, local sensors are used in combination with an existing digital map to perform local low-speed path planning and control, such as determining a target road to be traveled, performing planning control on the feasible regions and feasible roads obtained by the local mobile terminal, and performing lane changes or cruising according to the obtained real-time accurate lane lines.
If a prior global semantic map is obtained, the method proceeds to step S150.
In step S150, the local semantic map is matched in the global semantic map.
If the movable device 100 (vehicle) obtains an a priori global semantic map from the server device 200 (cloud server), this indicates that a crowdsourced vehicle has previously traversed the current road. Then, the global semantic map and the local semantic map may be matched to obtain a map positioning result.
In step S160, in response to matching the local semantic map in the global semantic map, the local pose information is updated according to the result of the matching.
When a road already has a full or partial map, the semantic mark results of the camera video sequence are matched against the cloud map to obtain a map positioning result, which is then fused with the IMU, the GPS, and the odometer to obtain a better pose output.
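The text does not specify how the map positioning result is fused with the IMU/GPS/odometer estimate; one standard choice, shown here as a hedged one-dimensional sketch with hypothetical names, is inverse-variance weighting (the scalar form of a Kalman measurement update, applied per axis):

```python
def fuse_estimates(dead_reckoning, map_fix):
    """Inverse-variance fusion of two position estimates along one axis.

    dead_reckoning: (value, variance) from IMU/GPS/odometer integration.
    map_fix:        (value, variance) from semantic map matching.
    Returns the fused (value, variance); the fused variance is always
    smaller than either input, i.e. the pose output is improved.
    """
    (x1, v1), (x2, v2) = dead_reckoning, map_fix
    w1, w2 = 1.0 / v1, 1.0 / v2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)
    return fused, variance
```

With a confident map fix (small variance), the fused pose is pulled strongly toward the map-matching result; with a low-confidence fix, dead reckoning dominates.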
Next, the execution process of steps S150 and S160 will be specifically described.
Fig. 5 illustrates a flow chart of semantic map matching steps according to an embodiment of the application.
As shown in fig. 5, step S150 may include:
in sub-step S151, the global semantic map is parsed to determine semantic entities and attribute information thereof.
Once the mobile device 100 (vehicle) obtains the global semantic map from the server device 200 (cloud server), it can resolve which roads the current mobile environment includes, which semantic entities on each road (feasible roads, curbs, lanes and lane lines, traffic signs, road surface signs, traffic lights, stop lines, pedestrian crossings, roadside tree pillars, etc.), and their corresponding attributes (location, size, orientation, category, etc.).
In sub-step S152, semantic entity matching pairs in the global semantic map and the local semantic map are found according to the attribute information.
For example, in the simplest case, the local semantic map may be matched with the global semantic map of the cloud according to the position coordinates of each semantic entity; that is, the matching operation described above is performed using the coarse GPS coordinates of the vehicle.
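This coordinate-based matching can be sketched as nearest-neighbor pairing of same-category entities within a gating distance; the data layout and the 5 m gate below are illustrative assumptions:

```python
def match_by_position(local_entities, global_entities, max_dist=5.0):
    """Pair each local entity with the nearest global entity of the same
    category within max_dist metres (coarse GPS-coordinate matching).

    local_entities / global_entities: {id: (category, x, y)}.
    Returns a list of (local_id, global_id) matching pairs.
    """
    pairs = []
    for lid, (lcat, lx, ly) in local_entities.items():
        best, best_d = None, max_dist
        for gid, (gcat, gx, gy) in global_entities.items():
            if gcat != lcat:
                continue
            d = ((lx - gx) ** 2 + (ly - gy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = gid, d
        if best is not None:
            pairs.append((lid, best))
    return pairs
```

As the following paragraph notes, this alone is fragile: a GPS bias larger than the spacing between same-category entities produces wrong pairs, which motivates the metric-scale and topological matching examples.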
However, positioning based on position coordinates alone may not achieve a good match, because GPS is subject to occlusion, multipath, and other factors that degrade its positioning accuracy, while accumulated motion parameters suffer from error accumulation and other factors that cause positioning bias.
In a first example, a metric scale matching operation may be performed based on the category attributes of the semantic entities. Metric scale matching is used because the local map and the real map have a scale difference, so during matching the scale influence is either ignored or modeled with a scale factor. "Metric" refers to attribute information such as the relative position, shape size, and height obtained according to the local pose information.
FIG. 6 illustrates a flow diagram of a category attribute matching based sub-step according to an embodiment of the present application.
As shown in fig. 6, the substep S152 may include:
in action S1521, valid semantic entities in the local semantic map are determined, which are semantic entities whose confidence is greater than or equal to a predetermined threshold.
In action S1522, a valid localization direction and a non-valid localization direction of the valid semantic entity are determined according to the attribute information of the valid semantic entity.
In act S1523, metric scale matching is performed on the local semantic map in the non-valid positioning direction.
In action S1524, the semantic entity matching pairs are found according to the result of the metric scale matching.
For example, in order to obtain a more accurate matching result and improve the matching confidence, different metric scale matching algorithms may be adopted according to the most effective semantic mark (semantic entity) categories in the obtained high-precision semantic map and the local semantic map (which may be a single-frame semantic map generated from single-frame sample data or a multi-frame semantic map combined from several single-frame semantic maps), and the current local pose accuracy may be corrected according to the matched mark categories.
For example, different types of semantic markings in a transportation facility have different directional positioning effects, wherein three-dimensional markings such as road signs, signboards, lateral intersections, etc. can correct the positioning accuracy of a vehicle in the longitudinal direction (i.e., the direction of lane extension, which is generally parallel to the direction of vehicle travel), and road signs such as lane lines, road edges, etc. can correct the positioning accuracy of a vehicle in the lateral direction (i.e., the direction of lane departure, which is generally offset from the direction of vehicle travel, such as perpendicular to or intersecting at an angle with the direction of vehicle travel).
For example, when the lane lines and road edges are reliable, metric scale matching may be performed on the local semantic map in the longitudinal direction; and when the traffic lights and guideboards are reliable, metric scale matching may be performed on the local semantic map in the lateral direction. Therefore, the matching process between the local semantic map and the global semantic map has a certain scale robustness.
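Along the non-valid direction, the scale factor mentioned above can be estimated from the matched pairs themselves. A hedged sketch under the assumption that global ≈ scale × local + offset along one axis (a closed-form least-squares fit; the function name is illustrative):

```python
def fit_scale_offset(local_coords, global_coords):
    """Least-squares fit of global ≈ scale * local + offset along one axis.

    local_coords / global_coords: coordinates of the same matched entities
    along the non-valid positioning direction, in the local and global maps
    respectively. Fitting a scale factor tolerates the scale difference
    between the local semantic map and the real (global) map.
    """
    n = len(local_coords)
    ml = sum(local_coords) / n
    mg = sum(global_coords) / n
    num = sum((l - ml) * (g - mg) for l, g in zip(local_coords, global_coords))
    den = sum((l - ml) ** 2 for l in local_coords)
    scale = num / den
    offset = mg - scale * ml
    return scale, offset
```

Applying the fitted (scale, offset) to the local map before searching for pairs makes the match robust to odometry scale drift, which is the "scale robustness" claimed above.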
Fig. 7 illustrates a flowchart of a local pose information updating step according to an embodiment of the present application.
As shown in fig. 7, the substep S160 may include:
in sub-step S161, in the non-valid localization direction, the attribute information of the semantic entity in the local semantic map among the semantic entity matching pairs is corrected according to the attribute information of the semantic entity in the global semantic map among the semantic entity matching pairs.
In sub-step S162, the local pose information is corrected according to the modified attribute information of the semantic entity in the local semantic map.
Since the valid semantic entities in the local semantic map already have high confidence in their valid localization directions, the attribute information (e.g., position, distance, shape, height, etc.) of the valid semantic entities may be corrected only in the non-valid localization directions according to the attribute information of the matching semantic entities in the global semantic map, thereby updating the local pose information of the movable device. Of course, the updating may also be performed in the valid and non-valid positioning directions simultaneously, so as to obtain a more comprehensive effect.
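How the corrected entity attributes propagate back into the pose is left open in the text; one simple, hedged reading is that the mean residual between global and local positions of the matched entities, taken along the non-valid direction, is applied to the pose as an offset (names below are illustrative):

```python
def pose_correction(matched_pairs):
    """Mean offset between global and local coordinates of matched entities
    along one axis (the non-valid positioning direction)."""
    residuals = [g - l for l, g in matched_pairs]
    return sum(residuals) / len(residuals)

def apply_correction(local_pose_coord, matched_pairs):
    """Shift the local pose along that axis by the mean matched-pair residual,
    which simultaneously brings the local entities onto their global
    counterparts and updates the vehicle's pose estimate."""
    return local_pose_coord + pose_correction(matched_pairs)
```

Averaging over several matched entities suppresses per-entity detection noise; a weighted mean using each entity's confidence would be the natural refinement.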
Alternatively or additionally, in a second example, a topological metric semantic matching operation may be performed based on the category attributes of the semantic entities. Topological metric semantic matching is used because, if matching is performed based on only a single-frame semantic map or a few frames of semantic maps, several candidate matches may be found in the nearby map simultaneously while only one of them is the true match; such mismatches can be removed by matching the trajectory semantic map stitched together from those frames. "Metric" refers to attribute information such as the relative position, shape size, and height obtained according to the local pose information. "Topology" refers to the order of appearance of the respective semantic entities, e.g., relational attributes such as front-back, left-right, and up-down (without considering specific values).
As shown in fig. 6, the sub-step S152 may further include:
in action S1525, performing topology metric semantic matching using the local semantic map and the global semantic map.
In action S1526, the semantic entity matching pairs are found according to the result of the topological metric semantic matching.
When similar repeated scenes exist near the local semantic map, overall topological metric semantic matching may be performed between the established local semantic map and the acquired high-precision semantic map, so as to correct the current local pose accuracy according to the matched mark categories.
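Reduced to its ordering component, topological matching can be sketched as searching the ordered global landmark list for windows that reproduce the locally observed category sequence; a unique window means the ambiguity among repeated scenes is resolved (the list representation is an assumption for illustration):

```python
def topological_matches(global_sequence, local_sequence):
    """Indices in the ordered global landmark category list at which the
    locally observed category sequence occurs.

    Several indices mean the local map is still ambiguous among repeated
    scenes; exactly one index means the trajectory semantic map has
    disambiguated the vehicle's position.
    """
    n, m = len(global_sequence), len(local_sequence)
    return [i for i in range(n - m + 1)
            if global_sequence[i:i + m] == local_sequence]
```

A full implementation would additionally check the metric spacing between consecutive landmarks (the "metric" half of topological metric matching) rather than categories alone.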
Accordingly, as shown in fig. 7, the sub-step S160 may further include:
in sub-step S163, a valid localization direction of the semantic entity is determined according to the attribute information of the semantic entity in the global semantic map among the semantic entity matching pairs.
In sub-step S164, in the effective localization direction, the attribute information of the semantic entity in the local semantic map among the semantic entity matching pairs is modified according to the attribute information of the semantic entity in the global semantic map among the semantic entity matching pairs.
In sub-step S165, the local pose information is corrected according to the modified attribute information of the semantic entities in the local semantic map.
Similar to the first example, since different categories of semantic tags in the transportation facility have localization effects in different directions, it is possible to first determine a valid localization direction of a matching semantic entity in the global semantic map, and to correct attribute information (e.g., position, distance, shape, height, etc.) of the matching semantic entity in the local semantic map in the valid localization direction, thereby updating the local pose information of the mobile device.
It should be noted that the first example and the second example can be implemented separately or in combination, and in the case of combination, the execution order of the first example and the second example does not constitute a limitation of the present application. However, preferably, since the first example is matched based on only a single frame or a few frames of the local semantic map, which requires a small amount of computation, and the second example is matched based on the track local semantic map, which requires a large amount of computation, the matching process of the first example may be performed first, and then the matching process of the second example may be performed.
Therefore, when the matching confidence obtained based on a single frame or a few frames of local semantic maps is not high, the track semantic map is used for carrying out overall topological measurement semantic matching, mismatching noise can be eliminated, the positioning precision is further improved, and the previously obtained local pose information is corrected. After obtaining a more accurate local positioning, the mobile device 100 may also perform movement control based further on the result.
In addition, the pose information determination method according to the embodiment of the present application may further include one or more additional steps.
Fig. 8 illustrates a flow chart of additional steps of a pose information determination method according to an embodiment of the present application.
As shown in fig. 8, the pose information determination method according to the embodiment of the present application may further include:
in step S170, a movement control strategy for controlling the movable device to enable it to achieve a predetermined movement goal in the current movement environment is generated according to the global semantic map and the updated local pose information.
Under the condition that the confidence of the matching between the global semantic map and the local semantic map is high enough (positioning is successful), corresponding movement decision planning can be performed according to the updated local pose information, for example to achieve automatic driving/assisted driving purposes such as planning a path in advance, reducing fuel consumption, and avoiding accidents.
In step S180, in response to the local semantic map not being matched in the global semantic map, a movement control policy for controlling the movable device to enable it to achieve a predetermined movement goal in the current mobile environment is generated from the non-semantic digital map of the current mobile environment and the original local pose information.
In the case where no match with the global semantic map is found and high-precision positioning therefore cannot be obtained, the movable device 100 may simply rely on its local sensing system in conjunction with a coarse-precision positioning map (e.g., an existing digital map) to perform local low-speed path planning and control, such as determining a target road to be traveled, performing planning control based on the feasible region obtained by the local mobile terminal, performing planning control based on feasible roads, and performing lane changes or cruising based on the obtained real-time accurate lane lines.
Still further, when generating a movement control strategy for the movable device 100, it would clearly be desirable if the strategy could also further facilitate semantic map matching while ensuring that the movable device 100 safely achieves its movement goal.
Therefore, after step S170 or step S180, in step S182, an auxiliary control policy of the mobile device is determined according to the location data and the global semantic map, the auxiliary control policy being used for controlling the mobile device to enable it to actively acquire upcoming semantic entities in the current mobile environment.
In step S184, the mobility control strategy and the auxiliary control strategy are integrated to generate an integrated control strategy.
Under the condition that the confidence of the matching between the global semantic map and the local semantic map is not high enough, i.e., the positioning confidence is low, active decisions that do not affect safety, comfort, or the destination (such as changing lanes, adjusting the vehicle's orientation, or adjusting the focal length and orientation of the vehicle-mounted camera) can be made by combining the coarse-precision positioning with the obtained high-precision semantic map, so that a video sequence containing possible key semantic marks (the positioning semantic marks that play a decisive role in the lateral and longitudinal directions at this position) can be actively acquired for detection, tracking, recognition, and matching with the high-precision semantic map, thereby improving the pose accuracy.
In step S190, the local semantic map is uploaded to a server device.
After obtaining the local semantic map, to implement the crowdsourcing mode, each mobile device may also upload the map to server device 200 (e.g., a cloud) to achieve a technical effect of dynamic update of the map.
For example, the vehicle can upload a locally integrated map to the cloud; the overall volume of transmitted data is small, and the map building process is automatic, requiring no manual marking. For example, a single frame of the local semantic map may be generated and then uploaded to the cloud, or several frames of local semantic maps may be integrated into a trajectory semantic map and then uploaded. For example, a trajectory semantic map may be formed based on time (e.g., at fixed intervals), or its generation may be triggered by other conditions (e.g., from entering a road to exiting it).
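Integrating several frames into one trajectory semantic map before upload can be sketched as merging same-category entities that re-appear across frames within a small distance; the data layout and the 2 m merge radius are illustrative assumptions:

```python
def merge_frames(frames, merge_dist=2.0):
    """Stitch per-frame local semantic maps into one trajectory map.

    frames: list of frames, each a list of (category, x, y) entities in map
    coordinates. Entities of the same category within merge_dist metres are
    treated as one re-observed entity and position-averaged, which is why
    the uploaded trajectory map stays compact.
    """
    merged = []  # each item: [category, x, y, observation_count]
    for frame in frames:
        for cat, x, y in frame:
            for item in merged:
                dist = ((item[1] - x) ** 2 + (item[2] - y) ** 2) ** 0.5
                if item[0] == cat and dist < merge_dist:
                    c = item[3]
                    item[1] = (item[1] * c + x) / (c + 1)
                    item[2] = (item[2] * c + y) / (c + 1)
                    item[3] = c + 1
                    break
            else:
                merged.append([cat, x, y, 1])
    return merged
```

The observation count doubles as a per-entity confidence that the cloud can use when fusing uploads from many vehicles.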
Therefore, by adopting the pose information determining method according to the embodiment of the application, the sample data of the current mobile environment in which the mobile equipment moves, which is acquired by the environment sensor, can be received, wherein the sample data comprises position data and image data; determining local pose information for the movable device based at least on the position data; constructing a local semantic map of the current mobile environment according to the local pose information of the mobile device and the image data, wherein the local semantic map comprises semantic entities in the current mobile environment and attribute information thereof, the semantic entities are entities which can influence the movement, and the attribute information indicates physical characteristics of the semantic entities; acquiring a global semantic map of the current mobile environment; matching the local semantic map in the global semantic map; and in response to matching the local semantic map in the global semantic map, updating the local pose information according to a result of the matching. Therefore, the pose information of the movable device can be efficiently and accurately obtained.
Specifically, the embodiments of the present application have the following advantages:
1) the semantic mark characteristics are utilized to improve the positioning accuracy; for example, the lane lines, road edges, and the like of a road can improve the lateral positioning accuracy, poles, trees, signboards, and the like can improve the longitudinal positioning accuracy, and stop lines, sidewalks, and traffic lights can improve the intersection positioning accuracy;
2) a high-precision global positioning result is obtained by matching the local semantic mark map obtained from the trajectory (a high-precision local odometer); for example, a vehicle can continuously detect and track a lane line from the moment it enters a road, so that at any time on the road it knows the offset of its current lane relative to the initial lane, and this information can reduce the mismatching of single-frame lane line positioning. Matching the vehicle's local semantic mark map (which has only just been acquired and has no global coordinates) with the server-side map is a sequence mark matching, which can improve the matching and positioning accuracy against the global map.
Specific examples
Next, a specific example of a pose information determination method according to an embodiment of the present application will be described. It is assumed that the mobile device is a vehicle and the server device is a cloud server.
In a specific example of the embodiment of the application, after a mobile-end vehicle is started, local mapping and sensing are firstly performed according to local capacity, including detection, tracking, identification and reconstruction of a road surface, a lane line, a road edge, a feasible region, a ground mark, a signboard, a traffic light, a stop line, a pedestrian crossing and the like, and a local relative positioning pose obtained according to a GPS, an IMU, a visual odometer and the like is registered in a local map to obtain a local semantic mark map, so that the vehicle can perform decision control according to the local semantic map and real-time sensing. Meanwhile, according to the GPS position, the vehicle can match the local semantic map with the global semantic map of the cloud, when the matching confidence coefficient exceeds a certain threshold value, the local semantic map can correct the global position to obtain an accurate global pose, and then the vehicle can carry out more complex and consistent global planning control. The matching of the local semantic map and the global semantic map is a dynamic process, the pose confidence degrees in different directions can be improved by different types of semantic marks, and the noise can be further removed by the overall local track semantic map matching.
In the first specific example below, it will be emphasized how the positioning accuracy is improved based on semantic tags in a single frame semantic map.
Fig. 9 illustrates a schematic diagram of a pose information determination scene according to a first specific example of an embodiment of the present application.
Different types of semantic marks in traffic facilities have different positioning functions: the longitudinal positioning of a vehicle can be corrected by road signs, signboards, lateral intersections, and the like, while the lateral positioning can be corrected by lane lines (and their types), road edges, and the like. As shown in fig. 9, when the vehicle travels to point A, it can only obtain a rough position, because the semantic marks encountered are not distinctive enough; when the vehicle travels to point B, the mobile terminal recognizes an accurate signboard mark and matches it with the cloud semantic map, so that the longitudinal positioning accuracy of the vehicle is improved, but the lateral accuracy is still insufficient; when the vehicle travels to point C, lane lines and road edges appear on the road and can be accurately recognized and modeled, so that the lateral positioning accuracy of the vehicle is also corrected, the overall accuracy is further improved, and a better basis is provided for subsequent decisions.
In the second specific example below, it will be emphasized how the positioning accuracy is improved based on the local trajectory semantic map.
Fig. 10 illustrates a schematic diagram of a pose information determination scene according to a second specific example of the embodiment of the present application.
A single structured semantic sign is easily repeated in the map (for example, there may be multiple speed-limit signs near the given GPS coordinates) and may thus cause positioning ambiguity. As shown in fig. 10, when a vehicle recognizes a circular sign near its GPS position on the driving road, since there are multiple signs of the same category nearby (existing at position A, position B, and position C simultaneously), the same localization confidence is obtained at several signs. When the vehicle then encounters a triangular sign of another category, the local trajectory contains a circular sign followed by a triangular sign, and the number of locations with this distribution pattern is reduced (existing only at position B and position C), so interference is further eliminated by matching against the semantic map and the confidence of the possible positions increases. As the vehicle continues forward and reaches a square signboard, a final positioning pose with high confidence is obtained, because only one place on the nearby road has the same distribution of categories and ordering for the three signboards (existing only at position C).
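The candidate-elimination logic of this scenario can be sketched as pruning candidate map positions whose stored sign sequence is inconsistent with the categories observed so far; the candidate names and sequences below are illustrative, mirroring positions A, B, and C of fig. 10:

```python
def prune_candidates(candidates, observed_sequence):
    """Keep only the candidate map positions whose stored sign sequence
    begins with the categories the vehicle has observed so far.

    candidates: {position_name: ordered list of sign categories at it}.
    As the observed sequence grows, the candidate set shrinks toward a
    single high-confidence position.
    """
    k = len(observed_sequence)
    return {name for name, seq in candidates.items()
            if seq[:k] == observed_sequence}
```

Run on the fig. 10 scenario, one observation keeps {A, B, C}, two keep {B, C}, and three leave only {C}, matching the narrative above.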
Therefore, the scheme can realize the positioning of an autonomous vehicle with simpler and cheaper equipment: the most direct is a visual Advanced Driver Assistance System (ADAS) equipped with a camera, a GPS, an IMU, and the like and having functions of detecting, tracking, and recognizing lane lines, traffic signs, pedestrians, and vehicles; the scheme is also compatible with systems such as a high-precision differential GPS or a binocular camera with a low-beam-count lidar. In addition, the scheme realizes positioning based on the sensing of existing semantic marks by common equipment that is already widely deployed, requires no large-scale infrastructure transformation or matching, and has a simple supporting system and cloud system, with low costs for producing, storing, processing, and transmitting the positioning map. Finally, the scheme gives the mobile terminal strong intelligence: local mapping and perception decisions can be made relying only on the mobile terminal's own capability, and global pose correction is performed when the available confidence of the cloud global map is high, so the scheme has better adaptability, robustness, and real-time performance.
It should be noted that, because the intelligence of the mobile vehicle end is greatly enhanced and utilized in the scheme, when global absolute positioning has interference or the confidence is not high enough, the vehicle can still perform real-time positioning and decision-making according to the local map and real-time perception, and therefore the scheme has better adaptability and robustness.
Exemplary pose information determination apparatus
Fig. 11 illustrates a block diagram of a pose information determination apparatus according to an embodiment of the present application.
As shown in fig. 11, the pose information determination apparatus 300 according to the embodiment of the present application may be applied to the movable device 100, and may include: a sample data receiving unit 310 for receiving sample data of a current moving environment in which the movable device is moving, the sample data including position data and image data, acquired by an environment sensor; a pose information determination unit 320 for determining local pose information of the movable device from at least the position data; a semantic map constructing unit 330, configured to construct a local semantic map of the current mobile environment according to the local pose information of the mobile device and the image data, where the local semantic map includes semantic entities in the current mobile environment and attribute information thereof, the semantic entities are entities that may affect movement, and the attribute information indicates physical characteristics of the semantic entities; a semantic map obtaining unit 340, configured to obtain a global semantic map of the current mobile environment; a semantic map matching unit 350 for matching the local semantic map in the global semantic map; and a pose information updating unit 360 for updating the local pose information according to a result of the matching in response to the local semantic map being matched in the global semantic map.
In one example, the pose information determination unit 320 may determine absolute pose information of the movable device from at least the position data; determining relative pose information of the movable device from at least the image data; and generating the local pose information from the absolute pose information and the relative pose information.
In one example, the semantic map building unit 330 may detect semantic entities in the current mobile environment from the image data; determine attribute information of the semantic entities according to the local pose information and the image data; and construct the local semantic map according to the semantic entities and their attribute information.
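As a rough illustration of what such a local semantic map might hold, the structure below stores each detected semantic entity together with its type, spatial attributes, and a detection confidence. All field names and values are hypothetical; the patent does not specify a concrete data layout.

```python
# A minimal sketch of a local semantic map: detected entities with
# illustrative attribute fields, anchored to the device's local pose.
local_semantic_map = {
    "entities": [
        {"id": 0, "type": "traffic_sign", "position": (102.0, 50.5),
         "size_m": 0.6, "confidence": 0.93},
        {"id": 1, "type": "lane_marking", "position": (101.0, 48.0),
         "length_m": 4.0, "confidence": 0.88},
    ],
    "origin_pose": (100.0, 50.0, 0.0),  # local pose the map is anchored to
}

def entities_of_type(smap, kind):
    """Convenience lookup over the sketched map structure."""
    return [e for e in smap["entities"] if e["type"] == kind]
```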
In one example, the semantic map building unit 330 may perform detection, tracking, and recognition on the image data; and determine semantic entities in the current mobile environment according to the result of the detection, tracking, and recognition.
In one example, the semantic map building unit 330 may determine a relative positional relationship between the semantic entity and the movable device from the image data; and determine spatial attribute information of the semantic entity according to the local pose information and the relative positional relationship.
In one example, the semantic map acquisition unit 340 may download the global semantic map from a server device according to the location data.
In one example, the semantic map matching unit 350 may parse the global semantic map to determine the semantic entities therein and their attribute information; and find semantic entity matching pairs between the global semantic map and the local semantic map according to the attribute information.
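A minimal sketch of attribute-based pair finding: match each local entity to the nearest global entity of the same type within a distance gate. The nearest-neighbour rule and the 5 m gate are illustrative assumptions, not the patent's matching criterion.

```python
import math

def find_matching_pairs(local_entities, global_entities, max_dist=5.0):
    """For each local entity, pick the nearest global entity of the same
    type within max_dist metres; return the resulting (local, global) pairs."""
    pairs = []
    for le in local_entities:
        best, best_d = None, max_dist
        for ge in global_entities:
            if ge["type"] != le["type"]:
                continue  # attribute gate: types must agree
            d = math.dist(le["position"], ge["position"])
            if d < best_d:
                best, best_d = ge, d
        if best is not None:
            pairs.append((le, best))
    return pairs

local = [{"type": "traffic_sign", "position": (102.0, 50.5)}]
global_map = [{"type": "traffic_sign", "position": (103.0, 50.0)},
              {"type": "lane_marking", "position": (102.1, 50.4)}]
pairs = find_matching_pairs(local, global_map)
```

Note how the nearby lane marking is rejected despite being closer, because its type does not match; attribute information is what disambiguates candidates here.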
In one example, the semantic map matching unit 350 may determine valid semantic entities in the local semantic map, the valid semantic entities being semantic entities whose confidence is greater than or equal to a predetermined threshold; determine an effective positioning direction and a non-effective positioning direction of the valid semantic entities according to their attribute information; perform metric scaling matching on the local semantic map in the non-effective positioning direction; and find the semantic entity matching pairs according to the result of the metric scaling matching.
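The idea of effective versus non-effective positioning directions can be sketched as follows: a confident, elongated entity such as a lane marking pins down position across its axis but not along it, while a compact entity constrains both directions. The confidence threshold and the type-based rule below are illustrative assumptions.

```python
def valid_entities(entities, threshold=0.8):
    """Keep only semantic entities whose confidence meets the threshold."""
    return [e for e in entities if e["confidence"] >= threshold]

def positioning_directions(entity):
    """Illustrative rule: an elongated entity constrains the direction
    across its axis (effective) but not the direction along it
    (non-effective); a compact entity constrains both directions."""
    if entity["type"] == "lane_marking":
        return {"effective": "lateral", "non_effective": "longitudinal"}
    return {"effective": "both", "non_effective": None}

entities = [{"type": "lane_marking", "confidence": 0.9},
            {"type": "traffic_sign", "confidence": 0.4}]
usable = valid_entities(entities)  # the low-confidence sign is filtered out
```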
In one example, the pose information updating unit 360 may correct, in the non-effective positioning direction, the attribute information of the semantic entities in the local semantic map among the semantic entity matching pairs according to the attribute information of the corresponding semantic entities in the global semantic map; and correct the local pose information according to the attribute information of the semantic entities in the corrected local semantic map.
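In its simplest possible form, correcting the local pose from matched pairs can be reduced to shifting the pose by the mean offset between the global and local positions of matched entities. This is a deliberately simplified stand-in for the per-direction correction described above, with hypothetical names throughout.

```python
def correct_pose(pose_xy, pairs):
    """Shift a 2-D pose by the mean (global - local) position offset of
    the matched entity pairs; return the pose unchanged if there are none."""
    if not pairs:
        return pose_xy
    dx = sum(g["position"][0] - loc["position"][0] for loc, g in pairs) / len(pairs)
    dy = sum(g["position"][1] - loc["position"][1] for loc, g in pairs) / len(pairs)
    return (pose_xy[0] + dx, pose_xy[1] + dy)

# One matched pair: the local map places the entity 1 m short and 2 m high
# relative to the global map, so the pose is shifted by (+1, -2).
pairs = [({"position": (102.0, 50.5)}, {"position": (103.0, 48.5)})]
corrected = correct_pose((100.0, 50.0), pairs)
```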
In one example, the semantic map matching unit 350 may perform topological metric semantic matching between the local semantic map and the global semantic map; and find the semantic entity matching pairs according to the result of the topological metric semantic matching.
In one example, the pose information updating unit 360 may determine an effective positioning direction of the semantic entity according to the attribute information of the semantic entity in the global semantic map among the semantic entity matching pairs; correct, in the effective positioning direction, the attribute information of the semantic entities in the local semantic map among the semantic entity matching pairs according to the attribute information of the corresponding semantic entities in the global semantic map; and correct the local pose information according to the attribute information of the semantic entities in the corrected local semantic map.
In one example, the pose information determination apparatus 300 according to an embodiment of the present application may further include a control strategy generation unit for generating a movement control strategy.
In one example, the control policy generation unit may generate, from the global semantic map and the updated local pose information, a movement control policy for controlling the movable device so that it achieves a predetermined movement purpose in the current mobile environment.
In one example, in response to the local semantic map not being matched in the global semantic map, the control policy generation unit may generate, from a non-semantic digital map of the current mobile environment and the original local pose information, a movement control policy for controlling the movable device so that it achieves a predetermined movement purpose in the current mobile environment.
In one example, the control policy generation unit may determine an auxiliary control policy of the movable device based on the position data and the global semantic map, the auxiliary control policy being used to control the movable device so that it actively acquires upcoming semantic entities in the current mobile environment; and may synthesize the movement control policy and the auxiliary control policy to generate a synthesized control policy.
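An auxiliary control policy for "actively acquiring" an upcoming semantic entity could, for example, pan a camera toward the entity predicted from the global semantic map. The sketch below computes such a pan command and merges it with a movement policy; all names and the key-priority merge rule are illustrative assumptions.

```python
import math

def auxiliary_control(pose, upcoming_entity):
    """Pan angle that points the sensor at the next expected entity,
    relative to the device's current heading."""
    px, py, ptheta = pose
    ex, ey = upcoming_entity["position"]
    return {"camera_pan_rad": math.atan2(ey - py, ex - px) - ptheta}

def synthesize(movement_policy, auxiliary_policy):
    """Combine both policies into one command set; on a key conflict the
    movement policy wins (a simple illustrative priority rule)."""
    merged = dict(auxiliary_policy)
    merged.update(movement_policy)
    return merged

# Entity straight ahead: no pan needed; movement and sensor commands coexist.
aux = auxiliary_control((0.0, 0.0, 0.0), {"position": (10.0, 0.0)})
combined = synthesize({"speed_mps": 5.0}, aux)
```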
In one example, the pose information determination apparatus 300 according to an embodiment of the present application may further include a semantic map uploading unit for uploading the local semantic map to the server device.
The specific functions and operations of the respective units and modules in the pose information determination apparatus 300 described above have been described in detail in the pose information determination method described above with reference to figs. 1 to 10, and a repetitive description thereof is therefore omitted.
It should be noted that the pose information determining apparatus 300 according to the embodiment of the present application may be integrated into the movable device 100 as a software module and/or a hardware module; in other words, the movable device 100 may include the pose information determining apparatus 300. For example, the pose information determining apparatus 300 may be a software module in the operating system of the movable device 100, or may be an application program developed for it; of course, the pose information determining apparatus 300 may also be one of the many hardware modules of the movable device 100.
Alternatively, in another example, the pose information determining apparatus 300 and the movable device 100 may be separate devices (e.g., the apparatus may be a server), and the pose information determining apparatus 300 may be connected to the movable device 100 through a wired and/or wireless network and exchange information with it in an agreed data format.
Exemplary Mobile device
Next, a movable device according to an embodiment of the present application is described with reference to fig. 12.
FIG. 12 illustrates a block diagram of a movable device according to an embodiment of the present application.
As shown in fig. 12, the movable device 100 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the movable device 100 to perform desired functions.
In one example, the movable device 100 may further include: an input device 13 and an output device 14.
The input means 13 may comprise, for example, a keyboard, a mouse, and a communication network and a remote input device connected thereto, etc.
For example, the input means 13 may comprise an environment sensor for acquiring sample data of a current mobile environment in which the movable device is moving. For example, the environment sensor may be an image sensor for capturing image data, which may be a camera or a camera array. As another example, the environment sensor may be a laser sensor for capturing scan data, which may be a laser or a laser array. As another example, the environment sensor may also be a motion sensor configured to acquire motion data of the movable device 100. For example, the motion sensor may be an inertial measurement unit and a motion encoder (including an accelerometer, a gyroscope, and the like) built into the movable device for measuring motion parameters of the movable device, such as velocity, acceleration, and displacement, so as to determine the position and orientation (attitude) of the movable device in the mobile environment; it may also be a built-in magnetometer or the like for calibrating the accumulated error of the attitude sensor in real time, so that a more accurate pose estimate can be obtained. Of course, the present application is not limited thereto, and the environment sensor may also be any of various other devices, such as a radar. In addition, other discrete environment sensors may also be used to collect the sample data and send it to the movable device 100.
The output device 14 may output various information and the like to the outside (e.g., a user). The output devices 14 may include, for example, speakers, displays, printers, and communication networks and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the movable device 100 relevant to the present application are shown in fig. 12, and components such as buses and input/output interfaces are omitted. It should be noted that the components and configuration of the movable device 100 shown in fig. 12 are exemplary only, not limiting, and the movable device 100 may have other components and configurations as desired.
For example, although not shown, the movable device 100 may also include a communication device or the like that may communicate with other devices (e.g., personal computers, servers, mobile stations, base stations, etc.) via a network, such as the internet, a wireless local area network, or a mobile communication network, or via other technologies, such as Bluetooth communication or infrared communication.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatuses, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the pose information determination method according to various embodiments of the present application described in the above-mentioned "exemplary methods" section of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the pose information determination method according to various embodiments of the present application described in the "exemplary method" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of the methods, devices, apparatuses, and systems referred to in this application are only used as illustrative examples and are not intended to require or imply that the methods, devices, apparatuses, and systems must be performed, connected, arranged, or configured in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (18)
1. A pose information determination method applied to a movable device includes:
receiving sample data acquired by an environment sensor, the sample data corresponding to a current mobile environment in which the movable device is moving, the sample data comprising position data and image data;
determining local pose information for the movable device based at least on the position data;
constructing a local semantic map of the current mobile environment according to the local pose information of the movable device and the image data, wherein the local semantic map comprises semantic entities in the current mobile environment and attribute information thereof, the semantic entities are entities which can influence the movement, and the attribute information indicates physical characteristics of the semantic entities;
acquiring a global semantic map of the current mobile environment;
matching the local semantic map in the global semantic map; and
in response to matching the local semantic map in the global semantic map, updating the local pose information according to a result of the matching.
2. The method of claim 1, wherein determining local pose information of the movable device from at least the position data comprises:
determining absolute pose information of the movable device from at least the position data;
determining relative pose information of the movable device from at least the image data; and
generating the local pose information from the absolute pose information and the relative pose information.
3. The method of claim 1, wherein constructing the local semantic map of the current mobile environment from the local pose information of the movable device and the image data comprises:
detecting semantic entities in the current mobile environment from the image data;
determining attribute information of the semantic entity according to the local pose information and the image data; and
constructing the local semantic map according to the semantic entities and the attribute information thereof.
4. The method of claim 3, wherein detecting semantic entities in the current mobile environment from the image data comprises:
performing detection, tracking, and recognition on the image data; and
determining semantic entities in the current mobile environment according to the result of the detection, tracking, and recognition.
5. The method of claim 3, wherein determining attribute information of the semantic entity from the local pose information and the image data comprises:
determining a relative positional relationship between the semantic entity and the movable device from the image data; and
determining the spatial attribute information of the semantic entity according to the local pose information and the relative positional relationship.
6. The method of claim 1, wherein acquiring the global semantic map of the current mobile environment comprises:
downloading the global semantic map from a server device according to the position data.
7. The method of claim 1, wherein matching the local semantic map in the global semantic map comprises:
analyzing the global semantic map to determine semantic entities and attribute information thereof; and
searching for semantic entity matching pairs in the global semantic map and the local semantic map according to the attribute information.
8. The method of claim 7, wherein finding semantic entity matching pairs in the global semantic map and the local semantic map according to attribute information comprises:
determining valid semantic entities in the local semantic map, the valid semantic entities being semantic entities having a confidence level greater than or equal to a predetermined threshold;
determining an effective positioning direction and a non-effective positioning direction of the valid semantic entities according to the attribute information of the valid semantic entities;
performing metric scaling matching on the local semantic map in the non-effective positioning direction; and
finding the semantic entity matching pairs according to the result of the metric scaling matching.
9. The method of claim 8, wherein updating the local pose information according to the result of the matching comprises:
in the non-effective positioning direction, modifying attribute information of semantic entities in the local semantic map among the semantic entity matching pairs according to the attribute information of the semantic entities in the global semantic map among the semantic entity matching pairs; and
correcting the local pose information according to the attribute information of the semantic entities in the corrected local semantic map.
10. The method of claim 7, wherein finding semantic entity matching pairs in the global semantic map and the local semantic map according to attribute information comprises:
performing topological metric semantic matching with the local semantic map and the global semantic map; and
finding the semantic entity matching pairs according to the result of the topological metric semantic matching.
11. The method of claim 10, wherein updating the local pose information according to the result of the matching comprises:
determining an effective positioning direction of the semantic entity according to the attribute information of the semantic entity in the global semantic map in the semantic entity matching pair;
in the effective positioning direction, modifying attribute information of semantic entities in the local semantic map among the semantic entity matching pairs according to the attribute information of the semantic entities in the global semantic map among the semantic entity matching pairs; and
correcting the local pose information according to the attribute information of the semantic entities in the corrected local semantic map.
12. The method of claim 1, further comprising:
generating a movement control strategy according to the global semantic map and the updated local pose information, wherein the movement control strategy is used for controlling the movable equipment to enable the movable equipment to achieve a preset movement purpose in the current movement environment.
13. The method of claim 1, further comprising:
in response to the local semantic map not being matched in the global semantic map, generating a movement control policy from a non-semantic digital map of the current mobile environment and the original local pose information, the movement control policy being used for controlling the movable device to enable it to achieve a predetermined movement purpose in the current mobile environment.
14. The method of claim 13, wherein the method further comprises:
determining an auxiliary control strategy of the movable device according to the position data and the global semantic map, wherein the auxiliary control strategy is used for controlling the movable device to actively acquire upcoming semantic entities in the current mobile environment; and
integrating the movement control strategy and the auxiliary control strategy to generate an integrated control strategy.
15. The method of claim 1, wherein the method further comprises:
uploading the local semantic map to a server device.
16. A pose information determination apparatus applied to a movable device, the apparatus comprising:
a sample data receiving unit for receiving sample data acquired by an environment sensor, the sample data corresponding to a current movement environment in which the movable device is moving, the sample data including position data and image data;
a pose information determination unit for determining local pose information of the movable device at least from the position data;
a semantic map construction unit, configured to construct a local semantic map of the current mobile environment according to the local pose information of the movable device and the image data, where the local semantic map includes semantic entities in the current mobile environment and attribute information thereof, the semantic entities are entities that may affect movement, and the attribute information indicates physical characteristics of the semantic entities;
the semantic map acquisition unit is used for acquiring a global semantic map of the current mobile environment;
a semantic map matching unit for matching the local semantic map in the global semantic map; and
a pose information updating unit for updating the local pose information according to a matching result in response to the local semantic map being matched in the global semantic map.
17. A movable device, comprising:
a processor;
a memory; and
computer program instructions stored in the memory, which, when executed by the processor, cause the processor to perform the method of any of claims 1-15.
18. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-15.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710317975.0A CN107144285B (en) | 2017-05-08 | 2017-05-08 | Pose information determination method and device and movable equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107144285A CN107144285A (en) | 2017-09-08 |
CN107144285B true CN107144285B (en) | 2020-06-26 |
Family
ID=59777874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710317975.0A Active CN107144285B (en) | 2017-05-08 | 2017-05-08 | Pose information determination method and device and movable equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107144285B (en) |
Families Citing this family (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108226938B (en) * | 2017-12-08 | 2021-09-21 | 华南理工大学 | AGV trolley positioning system and method |
WO2019139243A1 (en) * | 2018-01-15 | 2019-07-18 | 에스케이텔레콤 주식회사 | Apparatus and method for updating high definition map for autonomous driving |
WO2019148311A1 (en) * | 2018-01-30 | 2019-08-08 | 深圳前海达闼云端智能科技有限公司 | Information processing method and system, cloud processing device and computer program product |
US20190293434A1 (en) * | 2018-03-22 | 2019-09-26 | General Motors Llc | System and method for guiding users to a vehicle |
CN108759833B (en) * | 2018-04-25 | 2021-05-25 | 中国科学院合肥物质科学研究院 | Intelligent vehicle positioning method based on prior map |
CN108776474B (en) * | 2018-05-24 | 2022-03-15 | 中山赛伯坦智能科技有限公司 | Robot embedded computing terminal integrating high-precision navigation positioning and deep learning |
CN108803617B (en) * | 2018-07-10 | 2020-03-20 | 深圳大学 | Trajectory prediction method and apparatus |
WO2020010517A1 (en) * | 2018-07-10 | 2020-01-16 | 深圳大学 | Trajectory prediction method and apparatus |
CN109084749B (en) * | 2018-08-21 | 2021-05-11 | 北京云迹科技有限公司 | Method and device for semantic positioning through objects in environment |
CN108801269B (en) * | 2018-08-29 | 2021-11-12 | 山东大学 | Indoor cloud robot navigation system and method |
CN110148170A (en) * | 2018-08-31 | 2019-08-20 | 北京初速度科技有限公司 | A kind of positioning initialization method and car-mounted terminal applied to vehicle location |
CN109186606B (en) * | 2018-09-07 | 2022-03-08 | 南京理工大学 | Robot composition and navigation method based on SLAM and image information |
US10782136B2 (en) | 2018-09-28 | 2020-09-22 | Zoox, Inc. | Modifying map elements associated with map data |
CN110146096B (en) * | 2018-10-24 | 2021-07-20 | 北京初速度科技有限公司 | Vehicle positioning method and device based on image perception |
US11263245B2 (en) * | 2018-10-30 | 2022-03-01 | Here Global B.V. | Method and apparatus for context based map data retrieval |
CN111145248B (en) * | 2018-11-06 | 2023-06-27 | 北京地平线机器人技术研发有限公司 | Pose information determining method and device and electronic equipment |
CN111143489B (en) * | 2018-11-06 | 2024-01-09 | 北京嘀嘀无限科技发展有限公司 | Image-based positioning method and device, computer equipment and readable storage medium |
CN109602338A (en) * | 2018-11-26 | 2019-04-12 | 深圳乐动机器人有限公司 | A kind of method, sweeping robot and floor-mopping robot cleaning ground |
CN109544629B (en) | 2018-11-29 | 2021-03-23 | 南京人工智能高等研究院有限公司 | Camera position and posture determining method and device and electronic equipment |
CN109540175A (en) * | 2018-11-29 | 2019-03-29 | 安徽江淮汽车集团股份有限公司 | A kind of LDW test macro and method |
CN111323029B (en) * | 2018-12-16 | 2022-05-27 | 北京魔门塔科技有限公司 | Navigation method and vehicle-mounted terminal |
CN111323004B (en) * | 2018-12-16 | 2022-05-13 | 北京魔门塔科技有限公司 | Initial position determining method and vehicle-mounted terminal |
CN111351493B (en) * | 2018-12-24 | 2023-04-18 | 上海欧菲智能车联科技有限公司 | Positioning method and system |
CN110147095A (en) * | 2019-03-15 | 2019-08-20 | 广东工业大学 | Robot method for relocating based on mark information and Fusion |
CN109947103B (en) * | 2019-03-18 | 2022-06-28 | 深圳一清创新科技有限公司 | Unmanned control method, device and system and bearing equipment |
WO2020191642A1 (en) * | 2019-03-27 | 2020-10-01 | 深圳市大疆创新科技有限公司 | Trajectory prediction method and apparatus, storage medium, driving system and vehicle |
CN111750882B (en) * | 2019-03-29 | 2022-05-27 | 北京魔门塔科技有限公司 | Method and device for correcting vehicle pose during initialization of navigation map |
CN110068824B (en) * | 2019-04-17 | 2021-07-23 | 北京地平线机器人技术研发有限公司 | Sensor pose determining method and device |
CN110084853A (en) * | 2019-04-22 | 2019-08-02 | 北京易达图灵科技有限公司 | A kind of vision positioning method and system |
CN110069593B (en) * | 2019-04-24 | 2021-11-12 | 百度在线网络技术(北京)有限公司 | Image processing method and system, server, computer readable medium |
CN110060343B (en) * | 2019-04-24 | 2023-06-20 | 阿波罗智能技术(北京)有限公司 | Map construction method and system, server and computer readable medium |
DE102019206036A1 (en) * | 2019-04-26 | 2020-10-29 | Volkswagen Aktiengesellschaft | Method and device for determining the geographical position and orientation of a vehicle |
WO2020223974A1 (en) * | 2019-05-09 | 2020-11-12 | 珊口(深圳)智能科技有限公司 | Method for updating map and mobile robot |
CN111982133B (en) * | 2019-05-23 | 2023-01-31 | 北京地平线机器人技术研发有限公司 | Method and device for positioning vehicle based on high-precision map and electronic equipment |
CN110349211B (en) * | 2019-06-18 | 2022-08-30 | 达闼机器人股份有限公司 | Image positioning method and device, and storage medium |
CN112116654B (en) * | 2019-06-20 | 2024-06-07 | 杭州海康威视数字技术股份有限公司 | Vehicle pose determining method and device and electronic equipment |
CN112215887B (en) * | 2019-07-09 | 2023-09-08 | 深圳市优必选科技股份有限公司 | Pose determining method and device, storage medium and mobile robot |
CN112284399B (en) * | 2019-07-26 | 2022-12-13 | 北京魔门塔科技有限公司 | Vehicle positioning method based on vision and IMU and vehicle-mounted terminal |
CN110909585B (en) * | 2019-08-15 | 2022-09-06 | 纳恩博(常州)科技有限公司 | Route determining method, travelable device and storage medium |
WO2021035471A1 (en) * | 2019-08-26 | 2021-03-04 | Beijing Voyager Technology Co., Ltd. | Systems and methods for positioning a target subject |
CN111936946A (en) * | 2019-09-10 | 2020-11-13 | 北京航迹科技有限公司 | Positioning system and method |
CN112711249B (en) * | 2019-10-24 | 2023-01-03 | 科沃斯商用机器人有限公司 | Robot positioning method and device, intelligent robot and storage medium |
CN110887470B (en) * | 2019-11-25 | 2023-06-09 | 天津大学 | Orientation pose measurement method based on microlens array two-dimensional optical coding identification |
CN111060135B (en) * | 2019-12-10 | 2021-12-17 | 亿嘉和科技股份有限公司 | Map correction method and system based on local map |
CN111220164A (en) * | 2020-01-21 | 2020-06-02 | 北京百度网讯科技有限公司 | Positioning method, device, equipment and storage medium |
CN111427373B (en) * | 2020-03-24 | 2023-11-24 | 上海商汤临港智能科技有限公司 | Pose determining method, pose determining device, medium and pose determining equipment |
CN113296133B (en) * | 2020-04-01 | 2024-03-15 | 易通共享技术(广州)有限公司 | Device and method for realizing position calibration based on binocular vision measurement and high-precision positioning fusion technology |
CN111735451B (en) * | 2020-04-16 | 2022-06-07 | 中国北方车辆研究所 | Point cloud matching high-precision positioning method based on multi-source prior information |
CN111780771B (en) * | 2020-05-12 | 2022-09-23 | 驭势科技(北京)有限公司 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
CN111540023B (en) * | 2020-05-15 | 2023-03-21 | 阿波罗智联(北京)科技有限公司 | Monitoring method and device of image acquisition equipment, electronic equipment and storage medium |
CN111968262B (en) * | 2020-07-30 | 2022-05-20 | 国网智能科技股份有限公司 | Semantic intelligent substation inspection operation robot navigation system and method |
CN114063091A (en) * | 2020-07-30 | 2022-02-18 | 北京四维图新科技股份有限公司 | High-precision positioning method and product |
CN112068172A (en) * | 2020-09-08 | 2020-12-11 | 广州小鹏自动驾驶科技有限公司 | Vehicle positioning method and device |
CN112179359B (en) * | 2020-09-27 | 2022-09-23 | 驭势科技(北京)有限公司 | Map matching method and device, electronic equipment and storage medium |
CN112308810B (en) * | 2020-11-05 | 2022-05-13 | 广州小鹏自动驾驶科技有限公司 | Map fusion method and device, server and storage medium |
CN112685527B (en) * | 2020-12-31 | 2024-09-17 | 北京迈格威科技有限公司 | Method, device and electronic system for establishing map |
CN113295159B (en) * | 2021-05-14 | 2023-03-03 | 浙江商汤科技开发有限公司 | Positioning method and device for end cloud integration and computer readable storage medium |
CN113256712B (en) * | 2021-06-01 | 2023-04-18 | 北京有竹居网络技术有限公司 | Positioning method, positioning device, electronic equipment and storage medium |
CN113587948B (en) * | 2021-07-23 | 2024-09-17 | 中汽创智科技有限公司 | Positioning signal spoofing identification method, device, equipment and storage medium |
CN113936046A (en) * | 2021-11-02 | 2022-01-14 | 北京京东乾石科技有限公司 | Object positioning method and device, electronic equipment and computer readable medium |
CN114296096A (en) * | 2021-12-23 | 2022-04-08 | 深圳优地科技有限公司 | Robot positioning method, system and terminal |
CN114440860B (en) * | 2022-01-26 | 2024-07-19 | 亿咖通(湖北)技术有限公司 | Positioning method, positioning device, computer storage medium and processor |
CN114674307B (en) * | 2022-05-26 | 2022-09-27 | 苏州魔视智能科技有限公司 | Repositioning method and electronic equipment |
US12027041B1 (en) * | 2023-03-19 | 2024-07-02 | Kamran Barelli | Systems and methods for detecting stop sign vehicle compliance |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101000507A (en) * | 2006-09-29 | 2007-07-18 | 浙江大学 | Method for moving robot simultaneously positioning and map structuring at unknown environment |
CN101008566A (en) * | 2007-01-18 | 2007-08-01 | 上海交通大学 | Intelligent vehicular vision device based on ground texture and global localization method thereof |
CN103884330A (en) * | 2012-12-21 | 2014-06-25 | 联想(北京)有限公司 | Information processing method, mobile electronic device, guidance device, and server |
CN104764457A (en) * | 2015-04-21 | 2015-07-08 | 北京理工大学 | Urban environment composition method for unmanned vehicles |
CN105841687A (en) * | 2015-01-14 | 2016-08-10 | 上海智乘网络科技有限公司 | Indoor location method and indoor location system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105928505B (en) * | 2016-04-19 | 2019-01-29 | 深圳市神州云海智能科技有限公司 | The pose of mobile robot determines method and apparatus |
2017
- 2017-05-08 CN CN201710317975.0A patent/CN107144285B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN107144285A (en) | 2017-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107144285B (en) | Pose information determination method and device and movable equipment | |
CN107145578B (en) | Map construction method, device, equipment and system | |
JP7432285B2 (en) | Lane mapping and navigation | |
AU2022203622B2 (en) | Crowdsourcing and distributing a sparse map, and lane measurements or autonomous vehicle navigation | |
AU2024201126B2 (en) | Systems and methods for anonymizing navigation information | |
US20210311490A1 (en) | Crowdsourcing a sparse map for autonomous vehicle navigation | |
US11755024B2 (en) | Navigation by augmented path prediction | |
CN112204349B (en) | System and method for vehicle navigation | |
US10248124B2 (en) | Localizing vehicle navigation using lane measurements | |
JP2023539868A (en) | Map-based real world modeling system and method | |
BR112019000918B1 (en) | METHOD AND SYSTEM FOR CONSTRUCTING COMPUTER READABLE CHARACTERISTICS LINE REPRESENTATION OF ROAD SURFACE AND NON-TRANSITIONAL MEDIUM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||