CN111912416B - Method, device and equipment for positioning equipment - Google Patents

Method, device and equipment for positioning equipment

Info

Publication number
CN111912416B
CN111912416B CN201910377570.5A
Authority
CN
China
Prior art keywords
map
road
information
perception
elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910377570.5A
Other languages
Chinese (zh)
Other versions
CN111912416A (en)
Inventor
付万增
王哲
石建萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201910377570.5A priority Critical patent/CN111912416B/en
Priority to PCT/CN2020/075069 priority patent/WO2020224305A1/en
Priority to KR1020217039850A priority patent/KR20220004203A/en
Priority to JP2021565799A priority patent/JP2022531679A/en
Publication of CN111912416A publication Critical patent/CN111912416A/en
Application granted granted Critical
Publication of CN111912416B publication Critical patent/CN111912416B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/49 Determining position whereby the further system is an inertial position system, e.g. loosely-coupled

Abstract

The embodiments of the present specification provide a method, a device and equipment for device positioning. The method for device positioning includes: acquiring a perception road image of the road where equipment is located and initial positioning information of the equipment; identifying attribute information of perception road elements in the perception road image; determining, according to the attribute information of the perception road elements and based on a pre-established map, offset information between the perception road elements and map road elements in the map; and correcting the initial positioning information according to the offset information to obtain the positioning information of the equipment.

Description

Method, device and equipment for positioning equipment
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a method, an apparatus, and a device for device positioning.
Background
In many fields, location is fundamental and indispensable information. Over the years, with the development of location-based services, communications, and the Internet of Things, the demands on positioning have steadily increased, and positioning technology has developed accordingly. At present, with the development of automatic driving technology and the wide application of robots, increasingly high requirements are placed on positioning accuracy and real-time performance.
Disclosure of Invention
The present disclosure aims to provide a method, an apparatus and a device for device positioning.
In a first aspect, a method for device positioning is provided, the method comprising:
acquiring a perception road image of a road where equipment is located and initial positioning information of the equipment;
identifying attribute information of a perceptual road element in the perceptual road image;
determining, according to the attribute information of the perception road elements and based on a pre-established map, offset information between the perception road elements and map road elements in the map;
and correcting the initial positioning information according to the offset information to obtain the positioning information of the equipment.
In combination with any one of the embodiments provided by the present disclosure, the creating of the map includes:
collecting map road images of roads by a collection vehicle;
identifying attribute information of map road elements in the map road image;
and establishing the map based on the attribute information of the map road elements.
In combination with any one of the embodiments provided by the present disclosure, the creating of the map includes:
obtaining semantic information and position information of map road elements in a high-precision map;
And establishing the map based on the semantic information and the position information of the map road elements.
In combination with any one of the embodiments provided by the present disclosure, determining, according to the attribute information of the perception road elements and based on a pre-established map, the offset information between the perception road elements and the map road elements in the map includes:
determining map road elements matched with the perception road elements from the map according to the attribute information of the perception road elements;
determining positioning information of paired perception road elements and map road elements in the same equipment coordinate system;
determining a positioning offset between the paired perceived road element and map road element based on the positioning information.
In combination with any one of the embodiments provided by the present disclosure, determining, from the map according to the attribute information of the perception road element, a map road element paired with the perception road element includes:
searching map road elements in a preset range in the map based on the initial positioning information;
pairing the perception road elements in the perception road image with the map road elements within the preset range based on the attribute information, so as to obtain a plurality of pairing schemes, wherein different pairing schemes differ in the pairing of at least one perception road element with the map road elements within the preset range;
Determining a confidence level for each of the pairing schemes;
determining the map road element paired with the perception road element according to the pairing scheme, among the plurality of pairing schemes, whose confidence is the highest or exceeds a set threshold.
In combination with any one of the embodiments provided by the present disclosure, pairing the perception road elements in the perception road image with the map road elements within the preset range includes:
and pairing a perception road element in the perception road image with a null or virtual element in the map road elements when no paired map road element can be determined for the perception road element among the map road elements within the preset range.
In combination with any one of the embodiments provided in the present disclosure, determining the confidence level of each of the pairing schemes includes:
respectively determining, for each pairing scheme, the individual similarity of each pairing of a perception road element with a map road element;
determining, for each pairing scheme, the overall similarity of the perception road elements and the map road elements;
and determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme.
In connection with any embodiment provided by the present disclosure, the positioning offset includes a coordinate offset and/or a direction offset;
determining a positioning offset between the paired perceived road element and map road element based on the positioning information, comprising:
sampling pixel points of the perception road elements to obtain a perception sampling point set;
sampling pixel points of the map road elements to obtain a map sampling point set;
determining a rotational translation matrix between sampling points included in the perception sampling point set and the map sampling point set respectively;
and obtaining coordinate offset and direction offset of the perception road element and the map road element based on the rotation and translation matrix.
In combination with any embodiment provided by the present disclosure, the obtaining of the perceived road image of the road where the device is located and the initial positioning information of the device includes:
acquiring the perception road image of the road where the device is located based on a vision sensor arranged on the device;
determining initial positioning information of the device based on a Global Positioning System (GPS) and/or an Inertial Measurement Unit (IMU) arranged on the device.
With reference to any embodiment provided by the present disclosure, after the correcting the initial positioning information according to the offset information to obtain the positioning information of the device, the method further includes:
And fusing the obtained positioning information and the initial positioning information again to obtain the corrected positioning information.
In a second aspect, an apparatus for device localization is provided, the apparatus comprising:
the device comprises an obtaining unit, a processing unit and a processing unit, wherein the obtaining unit is used for obtaining a perception road image of a road where the device is located and initial positioning information of the device;
an identifying unit configured to identify attribute information of a perception road element in the perception road image;
the determining unit is used for determining, according to the attribute information of the perception road element and based on a pre-established map, offset information between the perception road element and a map road element in the map;
and the correcting unit is used for correcting the initial positioning information according to the offset information to obtain the positioning information of the equipment.
In combination with any one of the embodiments provided by the present disclosure, the apparatus further includes a map establishing unit configured to:
collecting map road images of roads by a collection vehicle;
identifying attribute information of map road elements in the map road image;
and establishing the map based on the attribute information of the map road elements.
In combination with any one of the embodiments provided by the present disclosure, the apparatus further includes a map establishing unit configured to:
Obtaining semantic information and position information of map road elements in a high-precision map;
and establishing the map based on the semantic information and the position information of the map road elements.
In combination with any one of the embodiments provided by the present disclosure, the determining unit is specifically configured to:
determining map road elements matched with the perception road elements from the map according to the attribute information of the perception road elements;
determining positioning information of paired perception road elements and map road elements in the same equipment coordinate system;
determining a positioning offset between the paired perceived road element and map road element based on the positioning information.
In combination with any embodiment provided by the present disclosure, when the determining unit is configured to determine, from the map, a map road element paired with the perceived road element according to the attribute information of the perceived road element, the determining unit is specifically configured to:
searching map road elements in a preset range in the map based on the initial positioning information;
pairing the perception road elements in the perception road image with the map road elements within the preset range based on the attribute information, so as to obtain a plurality of pairing schemes, wherein different pairing schemes differ in the pairing of at least one perception road element with the map road elements within the preset range;
Determining a confidence level for each of the pairing schemes;
determining the map road element paired with the perception road element according to the pairing scheme, among the plurality of pairing schemes, whose confidence is the highest or exceeds a set threshold.
In combination with any embodiment provided by the present disclosure, when the determining unit is configured to pair the perception road elements in the perception road image with the map road elements within the preset range, the determining unit is further configured to:
pair a perception road element in the perception road image with a null or virtual element in the map road elements when no paired map road element can be determined for the perception road element among the map road elements within the preset range.
In combination with any one of the embodiments provided in the present disclosure, the determining unit, when configured to determine the confidence level of each of the pairing schemes, is specifically configured to:
respectively determining, for each pairing scheme, the individual similarity of each pairing of a perception road element with a map road element;
determining, for each pairing scheme, the overall similarity of the perception road elements and the map road elements;
and determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme.
In connection with any embodiment provided by the present disclosure, the positioning offset comprises a coordinate offset and/or a direction offset;
the determining unit, when configured to determine a positioning offset between the paired perceived road element and map road element based on the positioning information, is specifically configured to:
sampling pixel points of the perception road elements to obtain a perception sampling point set;
sampling pixel points of the map road elements to obtain a map sampling point set;
determining a rotational translation matrix between sampling points included in the perception sampling point set and the map sampling point set respectively;
and obtaining coordinate offset and direction offset of the perception road element and the map road element based on the rotation and translation matrix.
In combination with any one of the embodiments provided by the present disclosure, the obtaining unit is specifically configured to:
acquiring the perception road image of the road where the device is located based on a vision sensor arranged on the device;
determining initial positioning information of the device based on a Global Positioning System (GPS) and/or an Inertial Measurement Unit (IMU) arranged on the device.
In combination with any one of the embodiments provided in this disclosure, the correction unit is further configured to:
And fusing the obtained positioning information and the initial positioning information again to obtain the corrected positioning information.
In a third aspect, an apparatus is provided that includes a memory for storing computer instructions executable on a processor, and a processor for performing positioning based on any of the methods of the present disclosure when the computer instructions are executed.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which program, when executed by a processor, implements any of the methods for device positioning of the present disclosure.
According to the positioning method, apparatus and device of one or more embodiments, in the positioning process the initial positioning information of the device is corrected by means of the pre-established map to obtain the positioning information of the device. This can reduce the hardware cost required for device positioning, increase the computation speed of device positioning so as to meet the requirement of real-time device positioning, and/or achieve high positioning accuracy.
Drawings
In order to more clearly illustrate one or more embodiments of the present specification or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in one or more embodiments of the present specification, and other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is an example of a positioning method provided in at least one embodiment of the present disclosure;
fig. 2A is an example of a method for pre-establishing a map according to at least one embodiment of the present disclosure;
fig. 2B is an example of another method for pre-establishing a map according to at least one embodiment of the present disclosure;
FIG. 3A is an input road image for identifying road element semantic information provided by at least one embodiment of the present disclosure;
FIGS. 3B and 3C illustrate road elements and semantic information identified for the input road image shown in FIG. 3A;
fig. 4A is a schematic diagram of a pixel coordinate system of a camera provided in at least one embodiment of the present disclosure;
fig. 4B is a schematic diagram of a GPS device coordinate system provided by at least one embodiment of the present disclosure;
fig. 4C is a schematic diagram of coordinate system conversion provided by at least one embodiment of the present disclosure;
fig. 5A is a road image for coordinate system conversion provided by at least one embodiment of the present disclosure;
FIG. 5B is an effect diagram of the lane lines identified in FIG. 5A;
FIG. 5C is an effect diagram of the lane lines identified in FIG. 5B being transformed into a GPS device coordinate system;
fig. 6 is an example of a method of determining offset information between a perceptual road element and a map road element provided by at least one embodiment of the present disclosure;
Fig. 7 is an example of a method of determining map road elements paired with perceptual road elements provided by at least one embodiment of the present disclosure;
fig. 8 is a schematic diagram of a pairing scheme provided by at least one embodiment of the present disclosure;
fig. 9 illustrates an example of a method of determining a positioning offset between paired perceptual road elements and map road elements provided by at least one embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a closest point iteration method provided by at least one embodiment of the present disclosure;
fig. 11A is a schematic structural diagram of a positioning device according to at least one embodiment of the present disclosure;
fig. 11B is a schematic structural diagram of a positioning device according to at least one embodiment of the present disclosure;
fig. 11C is a schematic structural diagram of a positioning device according to at least one embodiment of the present disclosure;
fig. 12 is a block diagram of an apparatus provided in at least one embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions in one or more embodiments of the present specification are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present specification, and not all of them. All other embodiments obtained by those of ordinary skill in the art based on one or more embodiments of the present specification without creative effort shall fall within the scope of protection of the present disclosure.
At least one embodiment of the present disclosure provides a positioning method, as shown in fig. 1, where fig. 1 shows a flow of the positioning method, and the positioning method may include:
in step 101, a perceived road image of a road on which the device is located and initial positioning information of the device are obtained.
In this step, the device may itself be the target to be positioned. The device may be configured with a vision sensor to acquire the perception road image, and with a position sensor to obtain the initial positioning information. The device may include, but is not limited to, robots of any type, such as industrial robots, service robots, toy robots, educational robots, and the like; the present disclosure is not limited in this respect.
For ease of distinction in the description, a road image acquired in real time by an image sensor arranged on the device for the road where the device is located is referred to as a perception road image, and road elements identified from the perception road image are referred to as perception road elements. Correspondingly, as will be referred to later, a road image used for map building or in the map building process is referred to as a map road image, and road elements identified from the map road image are referred to as map road elements.
The position sensor may include at least one of: global positioning system GPS, inertial measurement unit IMU, etc.; the vision sensor may include at least one of: cameras, video cameras, and the like. It should be understood by those skilled in the art that the vision sensor and the position sensor are not limited to the above.
The initial positioning information of the device may be synchronized positioning information obtained for each frame of the perception road image. It may be GPS positioning information, IMU positioning information, or a fusion of the GPS positioning information and the IMU positioning information.
The fused information is a more reliable positioning result obtained from the GPS positioning information and the IMU positioning information, for example by Kalman filtering the two, or by computing their mean or weighted average.
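As an illustration of the fusion option, the sketch below shows a weighted-average fusion of the two positioning results; the (x, y, heading) pose representation and the weight value are assumptions made for this example and are not prescribed by this disclosure (a Kalman filter would instead use a predict/update cycle).

```python
import numpy as np

def fuse_positions(gps_pose, imu_pose, gps_weight=0.7):
    """Weighted-average fusion of GPS and IMU poses (illustrative only).

    gps_pose, imu_pose: (x, y, heading) estimates for the same image frame.
    gps_weight: assumed relative confidence in the GPS estimate.
    Note: averaging headings directly is only reasonable when the two
    heading estimates are close; otherwise wrap-around must be handled.
    """
    gps_pose = np.asarray(gps_pose, dtype=float)
    imu_pose = np.asarray(imu_pose, dtype=float)
    return gps_weight * gps_pose + (1.0 - gps_weight) * imu_pose

# Example: fuse one GPS fix with the dead-reckoned IMU pose for a frame.
initial_pose = fuse_positions([312.4, 158.9, 0.12], [312.9, 159.3, 0.10])
```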
In step 102, attribute information of a perceptual road element in the perceptual road image is identified.
In this step, the perceptual road elements may be road elements acquired in real time. The road element may include an identification associated with the road, and may include at least one or more of: various types of lane lines, stop lines, turn lines, road edge lines on roads, and traffic signs, traffic lights, street lights, and the like provided beside or on roads. Various types of lane lines may include, but are not limited to, white solid line lane lines, yellow dashed line lane lines, left side edge lane lines, right side edge guide lines, and the like; various types of traffic signs may include, but are not limited to, a slow traffic sign, a no-stop traffic sign, a speed-limit traffic sign, and the like. It will be appreciated by those skilled in the art that the road elements are not limited to those described above.
The attribute information of the perceptual road element may include one or more information related to the perceptual road element, such as semantic information, position information, shape information, and the like of the road element.
The semantic information of a road element may be the meaning represented by the road element, i.e. the information the road element is intended to express. For example, when a line on the road is detected in the acquired road image, it can be determined to be a stop line, a lane line, or the like according to its position on the road and its width and length relative to the road. Since lane lines can be subdivided into many types, "lane line" is basic semantic information, and the specific semantic information can be further determined according to the position and form of the line, for example a left edge lane line or a white solid lane line. For a traffic sign, a slow traffic sign or a no-stop traffic sign can be the specific semantic information of the road element. Those skilled in the art will appreciate that the particular form of expression of the semantic information of a road element does not affect the implementation of the disclosed method.
In step 103, offset information between the perceived road element and a map road element in the map is determined according to the attribute information of the perceived road element and based on a pre-established map.
The pre-established map may be a semantic map, a high-precision map, etc., but is not limited thereto, and may be other types of maps.
The road elements displayed in the above-described map are referred to as map road elements. Similar to the perceptual road element, the map road element may include an identification associated with the road, and may include at least one of: lane lines, stop lines, turn lines on roads, and traffic signs, traffic lights, street lights, etc., disposed beside or in front of the roads. The perceived road elements may be all of the same type as the map road elements, or may be partially the same type.
A map pre-established for a road includes all or most of the road elements of the road segment. The perception road image acquired during positioning is an image of a local area of the road segment, so the road elements identified in the perception road image, when transformed onto the map, correspond to a part of the road elements in the map.
Ideally, the perception road elements should coincide with the map road elements in the map, where coincidence means that a perception road element and its map road element coincide in the same coordinate system. In practice, however, the initial positioning information obtained during positioning may have a positioning deviation or insufficient positioning accuracy; in particular, when the accuracy of the positioning hardware on the device is not high, the initial positioning information is likely to be biased, so that the positions of the perception road elements may be inaccurate. Accordingly, offset information between the perception road elements and the map road elements can be determined based on the attribute information of the perception road elements and the attribute information of the map road elements, and used to correct the initial positioning information.
In step 104, the initial positioning information is corrected according to the offset information, so as to obtain the positioning information of the device.
Since the offset between the perception road element and the map road element is caused by the offset between the initial positioning information and the actual positioning, the initial positioning information can be corrected according to the offset between the perception road element and the map road element to obtain the final device positioning information.
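For illustration, a minimal sketch of this correction step is given below, assuming the initial positioning is represented as (x, y, heading) and the offset information as (dx, dy, dθ); this representation is an assumption made for the example, not a requirement of the disclosure.

```python
import numpy as np

def correct_pose(initial_pose, offset):
    """Apply the perception-vs-map offset to the initial positioning.

    initial_pose: assumed (x, y, heading) in the device/world frame.
    offset: assumed (dx, dy, dtheta) between perception and map road elements.
    """
    x, y, theta = initial_pose
    dx, dy, dtheta = offset
    return np.array([x + dx, y + dy, theta + dtheta])
```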
According to the attribute information of the perception road elements in the perception road image and based on the pre-established map, the offset information between the perception road elements and the map road elements in the map is determined, and the initial positioning information of the device is corrected accordingly to obtain the positioning information of the device.
This embodiment provides a vision-based device positioning solution that works with a pre-established map to achieve low cost, low computational complexity, good real-time performance and/or high positioning accuracy, and can be widely applied in fields such as intelligent driving of devices.
In other words, this embodiment corrects the initial positioning information based on the attribute information of road elements and the pre-established map, so the accuracy requirement on the initial positioning information of the device is low, which reduces the hardware cost required for device positioning, while the correction of the initial information still meets the requirement on positioning accuracy. In addition, calling a pre-established map, compared with building a map in real time during positioning, saves the storage space of the device and further reduces the demand on hardware resources; the algorithm has low complexity and good real-time performance and can meet the real-time requirement of positioning.
In the following, the positioning method is described in more detail, taking its application to an autonomous vehicle as an example. The autonomous vehicle may be provided with a vision sensor for acquiring road images in real time, a position sensor for obtaining initial positioning information, a processor, and a memory storing the pre-established map. During automatic driving, the processor processes the acquired perception road image and the data read from the map to obtain, in real time, the offset information between the perception road elements and the map road elements, and corrects the initial positioning information according to the offset information, so that automatic driving can use real-time, more accurate positioning information.
It is to be understood that other scenarios may apply the positioning method as well. For example, the map generation system can generate a high-precision map by using a more accurate positioning result by applying the method to a map acquisition data vehicle; for another example, a mobile robot system can realize a highly accurate self-positioning function by applying the method to the mobile robot system.
The following describes in detail the positioning process during automatic driving of the vehicle.
To achieve real-time positioning, a pre-established map may be acquired.
The pre-established map may be a Semantic Map. A semantic map is a device-oriented map in a computer-readable format (such as an XML format); compared with the high-precision maps usually used for intelligent driving of devices, it has the advantages of simplicity, small data volume, fast access, and the like.
The map building method is shown in fig. 2A, and may include:
in step 201, a map road image of a road is captured via a capture car.
The collection vehicle is provided with a vision sensor for collecting map road images. The vision sensor may include at least one of: cameras, video cameras, webcams, and the like. In order to enable the built map to achieve higher precision, the visual sensor configured by the acquisition vehicle can be a high-precision visual sensor, so that the map road image with high definition and high precision can be acquired. In the positioning process, a vision sensor for acquiring and sensing road images can adopt a sensor with relatively low precision.
The collection vehicle may also be configured with a high-precision position sensor to obtain the positioning information of the collection vehicle more accurately. In the positioning process, the position sensor used for acquiring the initial positioning information may be a sensor with lower positioning accuracy, or the position sensor already present on the device may be used.
In step 202, attribute information of map road elements in the map road image is identified.
The attribute information of the map road element may include semantic information, position information, shape information, and the like.
The attribute information can be obtained by using a trained neural network model for detecting road elements.
The neural network model may be trained with road images carrying labeling information (referred to as sample road images), where the road elements in the sample road images carry labeling information; the labeling information may be attribute information of the sample road elements and may include, but is not limited to, one or more of the following: semantic information, shape information, location information, and the like.
The neural network model is trained through the sample road image, so that the model has the capability of identifying the attribute information of the road elements in the input road image. For a map road image input to the neural network model, attribute information of map road elements in the image may be output.
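A minimal sketch of this identification step is shown below, assuming a segmentation-style model whose output is a per-pixel class map; the model interface, label set, and output format are illustrative assumptions rather than the network actually used by the disclosure.

```python
import numpy as np

LABELS = ["background", "lane_line", "stop_line", "traffic_sign"]  # assumed label set

def extract_road_elements(model, road_image):
    """Run the recognition model and collect attribute info per road element."""
    class_map = model.predict(road_image)   # assumed: HxW array of label indices
    elements = []
    for idx, name in enumerate(LABELS):
        if name == "background":
            continue
        ys, xs = np.nonzero(class_map == idx)
        if xs.size == 0:
            continue
        elements.append({
            "semantic": name,                                  # semantic information
            "pixels": np.stack([xs, ys], axis=1),              # position information (pixel coords)
            "bbox": (xs.min(), ys.min(), xs.max(), ys.max()),  # coarse shape information
        })
    return elements
```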
The neural network model is able to identify classes of road elements, depending on the type of sample road elements used in the training process. The model may be trained with more types of sample road elements to have a higher recognition capability.
Fig. 3A to 3C show the effect of identifying the semantic information of road elements. Fig. 3A is the road image input to the neural network model, which may be a perception road image, a map road image, or another road image. Fig. 3B shows a road element identified by the neural network model, drawn as the horizontal thick solid line in the figure; its semantic information is obtained as a "stop line" (stopline), marked at the upper left of the picture. Fig. 3C shows road elements identified by the neural network model, drawn as the thick solid lines in the vertical direction in the figure; both the basic semantic information and the specific semantic information of each line are obtained. The basic semantic information is "lane line" (lane), and the specific semantic information from left to right includes "white solid lane line" (white solid line) and "right edge lane line" (right edge), marked at the upper left of the picture.
It should be understood by those skilled in the art that the identification method of the map road element attribute information is not limited to the above, and may be obtained by other identification methods.
In step 203, the map is built based on attribute information of the map road elements.
In one example, a map is built based on semantic information, location information, and shape information of map road elements. The map may be referred to as a semantic map.
In the vehicle positioning process, the above-described attribute information of the map road element can be obtained by calling the map.
In this embodiment, the map road images are acquired by the high-precision vision sensor equipped on the collection vehicle, the attribute information of the road elements in the images is identified, and the positioning information of the collection vehicle is obtained by the high-precision position sensor to build the map. The built map has high precision and a small data volume, saving the storage space of the device.
In one example, when the position information of the identified map road elements is position information in the pixel coordinate system of a vision sensor (e.g., a camera), it can be converted into a latitude and longitude coordinate system by the following method, since the map needs to be universally applicable.
In the following, a description will be given taking a visual sensor as a camera and a position sensor as a GPS as an example. Those skilled in the art will appreciate that the conversion method is equally applicable to other vision sensors and position sensors.
The position information of a map road element in the pixel coordinate system of the camera may be referred to as first map position information. First, the first map position information is converted into the GPS device coordinate system to obtain second map position information. The coordinate transformation may be implemented using a homography matrix between the camera and the GPS, which may be obtained by calibrating the extrinsic parameters between the camera and the GPS.
The pixel coordinate system of the camera is shown in fig. 4A, wherein O ' -x ' -y ' is the pixel coordinate system of the camera in the figure.
The GPS device coordinate system is a coordinate system parallel to the ground plane with the GPS as its center. For a GPS device installed in an autonomous vehicle, the GPS device coordinate system is a right-handed Cartesian coordinate system with the direction of the vehicle head as the positive x-axis direction, as shown in fig. 4B. Since positioning only requires coordinates in the ground plane, the z-axis height information is not involved when converting the first map position information of the map road elements into the GPS device coordinate system.
In one example, the GPS operates in an RTK (Real-Time Kinematic) mode to obtain better positioning results.
Fig. 5A to 5C show the effect of converting road elements from the pixel coordinate system of the camera to the GPS device coordinate system. The road elements may be map road elements or perception road elements. Fig. 5A is the original road image. Fig. 5B shows the lane lines identified in the road image, where the three thick solid lines in the longitudinal direction are the lane lines; the basic semantics "lane lines" (lane lines) of the three identified lane lines are marked at the upper left of the image, and the specific semantics of the three lane lines from left to right are "left edge lane line" (left edge), "white dashed lane line" (white dot line), and "right edge lane line" (right edge). Fig. 5C shows the effect of converting the lane lines into the GPS device coordinate system, where the three solid lines in the longitudinal direction are the lane lines in the GPS device coordinate system.
Next, the current positioning information of the GPS may be converted into a latitude and longitude coordinate system to obtain third map position information.
The GPS positioning information is synchronized with the map road image collected by the camera. Here, synchronization means that one synchronized positioning information is obtained from the GPS for each frame of map road image.
The GPS positioning information describes a unique position and orientation of the GPS on the earth, which correspond to the position (x, y) of the origin of the GPS device coordinate system in the geodetic coordinate system and the angle θ between the x-axis of the GPS device coordinate system and the geodetic east direction, respectively, which can be expressed as (x, y, θ).
In the case where the second map position information of the map road element in the GPS device coordinate system is known, the map road element may be converted from the GPS device coordinate system to a latitude and longitude coordinate system, such as the WGS84 coordinate system, by the GPS positioning information synchronized with the frame of road image from which the map road element is acquired. After the conversion, third map position information is obtained.
In one example, the map road elements may first be converted from the GPS device coordinate system to the Universal Transverse Mercator (UTM) coordinate system. Since both coordinate systems are two-dimensional right-handed Cartesian coordinate systems, the conversion can be achieved with only a rotation and a translation: the rotation angle is the angle θ between the vehicle heading and due east, and the translation amount is the positioning information (x, y).
Fig. 4C shows a coordinate system transformation diagram, in which the pixel coordinate system of the camera, the GPS device coordinate system, and the UTM coordinate system are arranged in order from left to right.
After the map road elements are converted into the UTM coordinate system, they are further converted from the UTM coordinate system into the latitude and longitude coordinate system, yielding the third map position information.
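The sketch below illustrates this two-step conversion (device frame to UTM by rotation and translation, then UTM to latitude/longitude); the use of pyproj and the choice of UTM zone are assumptions made for the example, and the device origin is assumed to already be expressed in UTM coordinates.

```python
import numpy as np
from pyproj import Transformer

def device_to_latlon(points_dev, device_utm_xy, heading, utm_crs="EPSG:32650"):
    """Convert road-element points from the GPS device frame to latitude/longitude.

    points_dev: (N, 2) points in the GPS device coordinate system.
    device_utm_xy: (x, y) of the device origin in UTM (zone 50N assumed here).
    heading: angle theta between the device x-axis and due east, in radians.
    """
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s], [s, c]])
    points_utm = points_dev @ rot.T + np.asarray(device_utm_xy)  # rotate, then translate
    to_wgs84 = Transformer.from_crs(utm_crs, "EPSG:4326", always_xy=True)
    lon, lat = to_wgs84.transform(points_utm[:, 0], points_utm[:, 1])
    return np.stack([lat, lon], axis=1)
```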
In this embodiment, the map is established based on the global positioning information of the map road elements. Because global positioning information is unique, the established map is universal; and since no particular position sensor is required of the object that uses the map, the positioning method can be widely applied.
The map may also be created by the method shown in fig. 2B, as shown in fig. 2B, the method comprising:
in step 211, semantic information and position information of map road elements in the high-precision map are acquired.
Compared with a semantic map, a high-precision map may include, in addition to the attribute information contained in a semantic map, more detailed information about the road elements, such as the road curvature, heading and gradient of the map road elements, and also association information between road elements.
By using the existing high-precision map, semantic information and position information of required map road elements or other information can be extracted from the map.
In step 212, the map is built based on the semantic information and the location information of the map road elements.
The map created in this step may also be referred to as a semantic map.
In this embodiment, the semantic map is established from an existing high-precision map, which saves cost; and compared with the large storage space required by a high-precision map, the semantic map established in this way has a small data volume and saves the storage space of the device.
The above describes a process of creating a map used when the autonomous vehicle performs real-time positioning, and the following describes a real-time positioning process of the autonomous vehicle.
First, a perceived road image acquired in real time using a vision sensor is obtained, and initial positioning information of a position sensor equipped on a vehicle, such as a GPS, an IMU, and the like, is obtained. The initial positioning information may be synchronous positioning information obtained for each frame of the perceptual road image.
Attribute information of the perceived road element in the perceived road image is then identified. This step, similar to that described in step 202, may include: and inputting the perception road image into a pre-trained neural network model to obtain attribute information of the perception road elements in the perception road image. The attribute information may include semantic information and position information, or may include semantic information, position information, shape information, and the like.
The neural network model may be the same as the model used to identify the map road elements, where "the same" may mean that the network structures and parameters of the two models are the same, or that the two models are trained with the same sample road images. The two models may also be different, for example with different network structures and parameters, or trained with different sample road images; in that case the sets of road elements the two models can identify should intersect, and the larger the intersection, the better the effect of the positioning method.
After identifying attribute information of a perceived road element, offset information between the perceived road element and a map road element in the map may be determined from the pre-established map based on the attribute information.
Fig. 6 illustrates a method of determining offset information between a perceptual road element and a map road element, which may include, as shown in fig. 6:
in step 601, a map road element paired with the perception road element is determined from the map according to the attribute information of the perception road element.
For a perception road image acquired in real time, if a map has been established in advance for the road, a map road element paired with each perception road element in the perception road image can be found on the map. That is, for a perception road element that is neither misrecognized nor newly appeared after the map was created or last updated, a corresponding map road element can usually be found on the map.
In step 602, position information of the paired perceived road element and map road element in the same device coordinate system is determined.
Since the comparison of the positions needs to be performed in the same coordinate system, if the obtained position information of the perceptual road element and the position information of the map road element are not in the same coordinate system, it is necessary to convert both into the same coordinate system.
Since the offset information between the perception road elements and the map road elements is obtained in order to correct the initial positioning information, and the initial positioning information is positioning information in the device coordinate system, the position information of the perception road elements and of the map road elements can both be converted into the device coordinate system, which facilitates real-time correction of the initial positioning information.
In the case where the position information of the map road element is the third map position information in the latitude and longitude coordinate system, the third map position information needs to be converted into the device coordinate system, which will be described below by taking the GPS device coordinate system as an example.
And converting the third map position information from the longitude and latitude coordinate system to the GPS equipment coordinate system, wherein the process is the inverse process of converting the second map position information from the GPS equipment coordinate system to the longitude and latitude coordinate system.
The process can be divided into two steps:
first, converting third map location information from a longitude and latitude coordinate system (e.g., WGS84 coordinate system) to a UTM coordinate system;
then, the map road elements are converted from the UTM coordinate system to the GPS device coordinate system using the initial positioning information of the GPS. For an autonomous vehicle, this step can be performed by rotating by the angle θ between the vehicle heading and due east and then translating by the GPS positioning information (x, y).
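A sketch of this inverse conversion is given below, mirroring the earlier device-to-latitude/longitude example; pyproj and the UTM zone remain assumptions, and the device origin is again assumed to be available in UTM.

```python
import numpy as np
from pyproj import Transformer

def latlon_to_device(points_latlon, device_utm_xy, heading, utm_crs="EPSG:32650"):
    """Convert map-element points from latitude/longitude back to the device frame."""
    to_utm = Transformer.from_crs("EPSG:4326", utm_crs, always_xy=True)
    e, n = to_utm.transform(points_latlon[:, 1], points_latlon[:, 0])  # expects lon, lat
    points_utm = np.stack([e, n], axis=1)
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s], [s, c]])
    # Undo the translation, then undo the rotation (the inverse of R^T is R).
    return (points_utm - np.asarray(device_utm_xy)) @ rot
```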
Those skilled in the art will appreciate that for other position sensors, the transformation from the latitude and longitude coordinate system to the device coordinate system may be performed according to its specific transformation rules.
In step 603, a positioning offset between the paired perceived road element and map road element is determined based on the location information.
After the position information, in the same device coordinate system, of a perception road element and of the map road element paired with it has been obtained, the positioning offset between the two can be determined based on their positions.
In this embodiment, the paired perception road elements and map road elements are converted into the same device coordinate system, and their position information is used to determine the positioning offset between them. The positioning offset can therefore directly correct the initial positioning information in the same device coordinate system, which is conducive to real-time positioning.
Fig. 7 illustrates a method of determining map road elements paired with perceptual road elements, which may include, as shown in fig. 7:
in step 701, map road elements within a preset range are searched for in the map based on the initial positioning information.
The initial positioning information is the position information of the device itself, for example, in the case of an autonomous vehicle, that is, the position information of the vehicle. By the initial positioning information, the position of the vehicle on the map can be determined, so that map road elements within a set range, namely map road elements near the vehicle, can be found in the map.
Since the perception road image is obtained by the vision sensor equipped on the vehicle, the perception road elements in the perception road image are road elements located near the vehicle at the time of positioning. Thus, searching for map road elements near the vehicle on the map is both the most likely and the fastest way to find the map road elements paired with the perception road elements.
The preset range can be set as required. If high matching accuracy is desired, the range can be set relatively large, so that more map road elements are obtained for pairing with the perception road elements in the subsequent process; if the real-time requirement is high and faster matching is desired, the range can be set relatively small. For example, the preset range may be 2 to 5 times the visual range of the vision sensor, centered on the initial positioning on the map, thereby balancing matching speed and accuracy.
For example, if the visual sensor has a visual range of 60m and the initial positioning error is 10m, the preset range may be set to (60+10) × 2. That is, in this case, the preset range may be a rectangular frame of 140m × 140m centered on the initial positioning.
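The following small sketch reproduces this example calculation; the function name and the box representation are illustrative only.

```python
def search_box(center_xy, visual_range_m=60.0, position_error_m=10.0, factor=2.0):
    """Return (xmin, ymin, xmax, ymax) of the square map-search window."""
    half = (visual_range_m + position_error_m) * factor / 2.0   # (60 + 10) * 2 / 2 = 70 m
    x, y = center_xy
    return (x - half, y - half, x + half, y + half)             # a 140 m x 140 m box

# Example: search window centered on the initial positioning.
box = search_box((500.0, 800.0))
```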
In step 702, pairwise pairing is performed on the perception road elements in the perception road image and the map road elements in the preset range based on the attribute information, so as to obtain multiple pairing schemes.
Each perception road element in the perception road image and each map road element within the preset range can be paired pairwise, for example by enumeration, to obtain a plurality of different pairing schemes.
Different pairing schemes differ in the pairing of at least one perception road element with the map road elements within the preset range.
For example, the perceptual road elements in the perceptual road image include a1, a2, …, aM, and the map road elements in the preset range include b1, b2, …, bN, where M, N are positive integers, and N is greater than or equal to M. That is, the number of map road elements is greater than, or at least equal to, the number of perceived road elements.
The perceived road elements (a1, a2, …, aM) and the map road elements (b1, b2, …, bN) are paired pairwise, and each resulting pairing scheme is a set of two-tuples, where each two-tuple (ai, bj) is one pairing of road elements. In the two-tuple (ai, bj), i is less than or equal to M and can be any integer in the range [1, M]; j is less than or equal to N and can be any integer in the range [1, N]. Also, in a pairing scheme, all of the perceived road elements (a1, a2, …, aM) are required to be paired, while the map road elements (b1, b2, …, bN) may include elements for which no pairing target is found.
In different pairing schemes, at least one two-tuple (ai, bj) is different.
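A minimal sketch of this enumeration, assuming each perceived element is paired with a distinct map element and leftover map elements simply remain unpaired (the element names are placeholders, not from the patent):

```python
from itertools import permutations

def enumerate_pairing_schemes(perceived, map_elems):
    # every perceived element gets exactly one distinct map element;
    # map elements left over in a scheme simply remain unpaired
    return [list(zip(perceived, chosen))
            for chosen in permutations(map_elems, len(perceived))]

schemes = enumerate_pairing_schemes(["a1", "a2"], ["b1", "b2", "b3"])
# 6 schemes, e.g. [("a1", "b1"), ("a2", "b2")], [("a1", "b1"), ("a2", "b3")], ...
```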
In one example, pairwise pairing of perceptual road elements with map road elements may be achieved through a bipartite graph model.
Firstly, constructing a bipartite graph model based on perception road elements and map road elements: abstracting each perception road element into a point in a perception road image, wherein all perception road elements form a perception point set; map road elements in the map are also abstracted into one point, and all the map road elements form a map point set.
In response to the situation that multiple road elements with the same semantic meaning exist in the perceived road image, for example, multiple lane lines exist, the perceived road elements with the same semantic meaning may be sorted in order from the left side to the right side of the vehicle, the map road elements with the same semantic meaning in the map may also be sorted by using a similar method, and the points in the formed corresponding point set are sequentially arranged according to the sorting of the road elements.
And connecting the perception point set and the map point set by using edges, wherein each edge represents the pairing relation between one perception road element and one map road element. Different connection modes generate different pairing schemes, and each obtained pairing scheme is an edge set.
In one example, a reasonable pairing scheme can be obtained in all pairing schemes by using a bipartite graph matching method based on the model.
The method comprises: among all the edge sets, selecting those in which as many edges as possible do not cross one another. Here, two edges do not cross when they have no common point and both vertices of one edge have larger indices in their respective point sets than the corresponding vertices of the other edge; such edges can therefore also be understood as not intersecting in the geometric sense.
Edge sets with a number of disjoint edges greater than a set proportion or a set threshold may be referred to as legitimate edge sets, i.e., a legitimate pairing scheme is obtained, such as shown in fig. 8.
By screening out the reasonable pairing schemes before carrying out the confidence calculation, the amount of computation in the subsequent process is reduced.
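A hedged sketch of this "legal pairing scheme" filter follows; the crossing test and the 0.9 ratio threshold are assumptions made for illustration, not values fixed by the patent. A scheme is written as (i, j) index pairs linking the i-th perceived element (left to right) with the j-th map element:

```python
def is_legal_scheme(edges, min_ratio=0.9):
    """A pair of edges 'crosses' when their left-to-right order is inverted;
    the scheme is legal when the fraction of non-crossing edge pairs
    reaches min_ratio."""
    edges = sorted(edges)                    # sort by perceived-element index
    pairs = crossings = 0
    for a in range(len(edges)):
        for b in range(a + 1, len(edges)):
            pairs += 1
            if edges[a][1] >= edges[b][1]:   # order inverted or shared map vertex
                crossings += 1
    return pairs == 0 or (pairs - crossings) / pairs >= min_ratio

print(is_legal_scheme([(0, 0), (1, 1), (2, 2)]))   # True: no crossings
print(is_legal_scheme([(0, 2), (1, 1), (2, 0)]))   # False: every pair crosses
```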
In step 703, a confidence level for each of the pairing schemes is determined.
The confidence is an evaluation index of how well the perceived road elements and map road elements match in a pairing scheme. In a pairing scheme, the higher the consistency of semantic information between each perceived road element and its map road element, and the larger the number of matched pairs, the higher the confidence of the pairing scheme.
In one example, the confidence level for each pairing scheme may be determined by:
first, the individual similarity of each pair of perceptual road elements and map road elements in each pairing scheme is determined, respectively.
The individual similarity may refer to a degree of similarity of attribute information of two elements for each binary pair in the pairing scheme. For example, the similarity of semantic information, the similarity of position information, the similarity of shape information, and the like may be included.
Taking the lane line as an example, the individual similarity between a perceived lane line and a map lane line can be calculated by the following formula, where the perceived lane line refers to a lane line in the perceived road image and the map lane line refers to a lane line in the map.
Weight(i,j) = -Distance(i,j) + O_type(i,j) * LaneWidth + O_edgetype(i,j) * LaneWidth    (1)
where Weight(i,j) represents the individual similarity between the ith (counted from left to right, the same below) perceived lane line and the jth map lane line, and can also be called a weight;
Distance(i,j) represents the distance between the ith perceived lane line and the jth map lane line; the lane lines are abstracted into line segments, and the distance can be calculated as a segment-to-segment Euclidean distance, i.e., the average of the distances from the two endpoints of one segment to the other segment;
LaneWidth represents the lane width, i.e., the width between two lane lines;
O_type(i,j) is 1 if and only if the lane line attribute of the ith perceived lane line is the same as that of the jth map lane line, and 0 otherwise; the lane line attribute may include lane line color and line type, such as yellow solid line or white dotted line;
O_edgetype(i,j) is 1 if and only if the edge lane line attribute of the ith perceived lane line is the same as that of the jth map lane line, and 0 otherwise; the edge lane line attribute indicates whether the lane line belongs to the edge of the road.
In the above formula, Distance(i,j) is used to calculate the similarity of position information between the perceived lane line and the map lane line, LaneWidth is used in calculating the similarity of their shape information, and O_type(i,j) and O_edgetype(i,j) are used to calculate the similarity of their semantic information.
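The following Python sketch is one possible transcription of formula (1). The segment representation (two 2-D endpoints per lane line) and the way the attribute comparisons are passed in as booleans are assumptions made only for illustration:

```python
import math

def point_to_segment(p, a, b):
    """Distance from point p to the segment with endpoints a and b (2-D tuples)."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def segment_distance(seg_a, seg_b):
    # Distance(i, j): average of the distances from the two endpoints of the
    # perceived segment to the map segment
    return 0.5 * (point_to_segment(seg_a[0], *seg_b) + point_to_segment(seg_a[1], *seg_b))

def individual_similarity(perc_seg, map_seg, same_type, same_edge_type, lane_width):
    # Weight(i, j) = -Distance(i, j) + O_type * LaneWidth + O_edgetype * LaneWidth
    o_type = 1.0 if same_type else 0.0
    o_edge = 1.0 if same_edge_type else 0.0
    return -segment_distance(perc_seg, map_seg) + (o_type + o_edge) * lane_width
```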
It will be appreciated by those skilled in the art that other reasonable formulas may be set for calculating the individual similarity between other road elements.
After the individual similarity is determined, the overall similarity of each perceived road element and map road element pair in each of the pairing schemes is next determined.
The overall similarity may be an overall evaluation of the similarity of the attribute information of all the binary pairs in one pairing scheme. The attribute information may include location information and semantic information.
For the overall similarity of the position information, the variance of the distances of the two elements in all the binary pairs can be used for representation. The smaller the variance, the closer the distance between two elements in all binary pairs is, the higher the overall similarity of the position information.
For the overall similarity of the semantic information, the similarity of the semantic information of two elements in all binary pairs can be averaged or obtained by weighted average calculation.
And finally, determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme.
For example, the confidence of each pairing scheme may be obtained by averaging the sum of the individual similarities of the two tuples with the overall similarity, or by weighted averaging.
In this embodiment, the confidence of the pairing scheme is comprehensively evaluated based on the individual similarity and the overall similarity of each binary group in the pairing scheme, so that the influence of the extreme effect (excellent or poor) of individual pairing on the confidence of the entire pairing scheme is avoided, and the calculation result of the confidence is more reliable.
The following is an example of a function for calculating a confidence score for a pairing scheme, which is a score calculated by three parts: the sum of the individual similarity, the overall similarity of the distance information and the overall similarity of the semantic information.
match_weight_sum = sum(match_items_[pr_idx][hdm_idx].weight) + CalculateVarianceOfMatchResult(match_result) + CalculateMMConfidence(match_result);    (2)
Wherein match _ weight _ sum represents a confidence score of a pairing scheme;
sum(match_items_[pr_idx][hdm_idx].weight) represents the sum of the individual similarities of the two-tuples in the pairing scheme; it is calculated by summing the weights of the edges selected in the pairing scheme, i.e., the sum of the weights of the edges corresponding to each paired point pair;
CalculateVarianceOfMatchResult(match_result) represents the overall similarity of the distance information of the two-tuples in the pairing scheme, calculated from the variance of the distance between the two elements of each two-tuple. Taking lane lines as an example, each pair of matched lane lines has a distance, and the variance is the variance of all of these distances. Theoretically, the distances between all pairs of perceived lane lines and map lane lines should be equal, i.e., the variance should be zero; in practice, because errors are inevitably introduced, the variance may not be zero;
CalculateMMConfidence(match_result) represents the overall similarity of the semantic information of the two-tuples in the pairing scheme, calculated by comparing the semantic similarity between the two elements of each two-tuple. Still taking lane lines as an example, it can be determined whether the attributes and the number of all matched lane lines are consistent. For example, the confidence may be 100% when all attributes are consistent, may be decreased by, for example, 10% for each pair of lane lines whose attributes are inconsistent, and may be decreased by 30% when the numbers do not match.
The confidence score of the pairing scheme can be obtained by calculating the results of the three parts and adding the results.
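A hedged sketch of this three-part score is given below. The structure mirrors formula (2), but the sign of the variance term (more consistent distances should score higher) and the exact penalty schedule are modeling choices of ours, using the example percentages from the text:

```python
from statistics import pvariance

def confidence_score(pairs, count_match=True):
    """pairs: one dict per two-tuple in the scheme, with keys
    'weight' (individual similarity), 'distance' (element-to-element distance)
    and 'attrs_match' (whether the paired attributes agree)."""
    weight_sum = sum(p["weight"] for p in pairs)          # sum of individual similarities
    distances = [p["distance"] for p in pairs]
    # overall similarity of the distance information: lower variance is better,
    # so the variance enters with a negative sign (an assumption, not spelled
    # out in the text)
    variance_term = -pvariance(distances) if len(distances) > 1 else 0.0
    # overall similarity of the semantic information: -10% per attribute
    # mismatch, -30% if the element counts do not match (example values)
    semantic = 1.0 - 0.10 * sum(not p["attrs_match"] for p in pairs)
    if not count_match:
        semantic -= 0.30
    return weight_sum + variance_term + semantic
```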
In step 704, the map road element paired with the perceived road element is determined from the pairing scheme, among the plurality of pairing schemes, whose confidence is the highest or exceeds a set threshold.
In this step, the scheme with the highest confidence may be used as the finally selected pairing scheme, or the pairing scheme exceeding a set threshold may be used as the finally selected pairing scheme, so that the map road element paired with the perception road element can be determined.
In this embodiment, the initial positioning information is used to obtain, on the map, the map road elements near the device for pairing with the perceived road elements. Compared with searching the global map for the map road elements paired with the perceived road elements, this reduces the amount of computation, improves the matching speed, and facilitates real-time positioning.
In one example, when the perceived road elements in the perceived road image are paired with the map road elements within the preset range, if a perceived road element cannot be paired with any of the map road elements within the preset range, a null or virtual element is added to the map road elements to be paired with that perceived road element.
In an ideal situation, the perceived road elements in the perceived road image correspond one-to-one to the map road elements in the map; however, when a perceived road element is the result of false recognition, or appears only after the map was built, no corresponding map road element can be found for it. By setting null or virtual elements, every perceived road element has a paired object during the determination of the pairing schemes, which makes the pairing schemes richer and facilitates a comprehensive evaluation of the optimal pairing scheme.
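A minimal sketch of this padding step, under the assumption that missing counterparts are simply represented by None (the function and names are illustrative only):

```python
def pad_with_virtual(perceived, matched):
    """matched: dict mapping a perceived element to its map element; perceived
    elements with no counterpart are paired with a virtual element (None)."""
    return [(p, matched.get(p)) for p in perceived]

pad_with_virtual(["a1", "a2"], {"a1": "b1"})
# -> [("a1", "b1"), ("a2", None)]   # "a2" is paired with a virtual element
```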
Fig. 9 illustrates a method of determining a positional offset between a paired perceived road element and a map road element, as illustrated in fig. 9, the method comprising:
in step 901, sampling is performed on the pixel points of the perceptual road element, so as to obtain a perceptual sampling point set.
In this step, the pixel points of the perceptual road element may be sampled at fixed intervals (e.g., 0.1 meter), so as to obtain a perceptual sampling point set.
Taking the lane line on the road as an example, the sensing lane line can be abstracted into a point set by sampling the lane line. For the case of parallel multiple lane lines, the lane lines may be arranged in the order from the left to the right of the vehicle, and the corresponding point sets may be arranged from top to bottom according to the order of the lane lines.
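A possible implementation of the fixed-interval sampling is sketched below, assuming the lane line is given as a polyline of (x, y) vertices; the 0.1 m step matches the example above:

```python
import math

def sample_polyline(points, step=0.1):
    """Sample a lane line given as a list of (x, y) vertices at a fixed
    interval (metres), producing a sampling point set."""
    samples, carry = [], 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        d = carry
        while d < seg_len:
            t = d / seg_len
            samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += step
        carry = d - seg_len        # leftover distance carried to the next segment
    return samples
```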
In step 902, sampling the pixel points of the map road elements to obtain a map sampling point set.
In this step, the map road elements may be sampled in a manner similar to step 901, to obtain a map sampling point set.
In step 903, a rotational-translational matrix is determined between the sampling points included in each of the set of perceptual sampling points and the set of map sampling points.
For the paired sensing sampling point set and map sampling point set, a rotation and translation matrix between the two point sets can be calculated by using a closest point iteration method. Fig. 10 shows a schematic diagram of the closest point iteration method, and the left side of the arrow represents two associated point sets (paired point sets) input to the algorithm model, and the rotation-translation matrix can be obtained by applying the algorithm model, which may be a least squares algorithm model, for example. By applying the rotation-translation matrix to the input point set, the coincidence of the two point sets can be realized, as shown in fig. 10, and the right side of the arrow indicates the coincident two point sets.
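The core least-squares step used inside such a closest-point iteration can be sketched as follows: given already-associated 2-D point pairs, estimate the rotation R and translation t that best align the perceived points to the map points. A full iteration would re-associate nearest neighbours and repeat; this sketch shows only the alignment step:

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares rotation R and translation t aligning src to dst,
    both (N, 2) arrays of corresponding sample points."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```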
In step 904, coordinate offsets and direction offsets of the perceptual road elements from the map road elements are obtained based on the rotational-translational matrix.
The rotation-translation matrix obtained in step 903 is the positioning offset to be determined: the translation coefficients in the rotation-translation matrix correspond to the coordinate offset, and the rotation coefficient corresponds to the direction offset.
The initial positioning information may be represented as (x₀, y₀, θ₀) and the positioning offset as (dx, dy, dθ); correspondingly, the positioning information obtained by correcting the initial positioning information may be represented as:
(x = x₀ + dx, y = y₀ + dy, θ = θ₀ + dθ).
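A short sketch of how the offset might be read out of the rotation-translation result of the previous step and applied to the initial pose; treating the heading offset as atan2 of the rotation coefficients is our assumption for the 2-D case:

```python
import math

def correct_pose(pose0, R, t):
    """pose0 = (x0, y0, theta0); R, t come from the rotation-translation step.
    The translation gives (dx, dy); the rotation gives dtheta."""
    x0, y0, theta0 = pose0
    dtheta = math.atan2(R[1][0], R[0][0])        # rotation coefficient -> direction offset
    dx, dy = float(t[0]), float(t[1])            # translation coefficients -> coordinate offset
    return x0 + dx, y0 + dy, theta0 + dtheta
```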
in one example, after the initial positioning information is corrected according to the offset information to obtain the positioning information of the device, the obtained positioning information and the initial positioning information may be fused again.
For example, the obtained positioning information and the initial positioning information can be fused by Kalman filtering, mean value calculation, weighted average calculation and other methods, so that excessive correction of the positioning information by map information is avoided, and the positioning result is more reliable.
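One of the simplest fusion options mentioned above, a weighted average, is sketched here; the weight is an assumed tuning value, and Kalman filtering would replace this with a proper measurement update:

```python
def fuse_pose(corrected, initial, w=0.8):
    """Weighted average of the corrected pose and the initial pose.
    A real implementation would also wrap the heading angle."""
    return tuple(w * c + (1.0 - w) * i for c, i in zip(corrected, initial))
```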
The camera may not capture all road elements, that is, the number of perceived road elements in the perceived road image may be smaller than the number of map road elements. Moreover, although the accuracy of detecting road elements with a neural network may reach ninety percent or more, a positioning method applied to automatic driving of a vehicle is still required to output accurate attribute information even when the neural network model produces missed detections and false detections.
Fig. 11A shows a schematic structural diagram of an apparatus for positioning a device, which may include, as shown in fig. 11A:
An obtaining unit 1101, configured to obtain a perceived road image of a road where a device is located and initial positioning information of the device;
an identifying unit 1102 configured to identify attribute information of a perceptual road element in the perceptual road image;
a determining unit 1103, configured to determine, according to attribute information of the perceptual road element and based on a pre-established map, offset information between the perceptual road element and a map road element in the map;
a correcting unit 1104, configured to correct the initial positioning information according to the offset information, so as to obtain positioning information of the device.
In another embodiment, as shown in fig. 11B, the apparatus further includes a map building unit 1105 configured to: collecting map road images of roads by a collection vehicle; identifying attribute information of map road elements in the map road image; and establishing the map based on the attribute information of the map road elements.
In another embodiment, as shown in fig. 11C, the apparatus further comprises a map building unit 1106 for: obtaining semantic information and position information of map road elements in a high-precision map; and establishing the map based on the semantic information and the position information of the map road elements.
In another embodiment, the determining unit 1103 is specifically configured to: determining map road elements matched with the perception road elements from the map according to the attribute information of the perception road elements; determining positioning information of paired perception road elements and map road elements in the same equipment coordinate system; determining a positioning offset between the paired perceived road element and map road element based on the positioning information.
In another embodiment, the determining unit 1103 is specifically configured to, when determining, from the map, a map road element paired with the perceived road element according to the attribute information of the perceived road element: searching map road elements in a preset range in the map based on the initial positioning information; pairing perception road elements in the perception road image with map road elements in the preset range in pairs based on attribute information to obtain a plurality of pairing schemes, wherein at least one perception road element in different pairing schemes is different from the map road elements in the preset range in pairing mode; determining a confidence level for each of the pairing schemes; determining a map road element paired with the perception road element among the plurality of pairing schemes with highest confidence or exceeding a set threshold.
In another embodiment, the determining unit 1103, when configured to pair the perceived road elements in the perceived road image with the map road elements within the preset range, is further configured to: and setting a null or virtual element in the map road elements to be paired with the perception road elements when the perception road elements in the perception road image cannot determine the paired road elements in the map road elements in the preset range.
In another embodiment, the determining unit 1103, when configured to determine the confidence of each pairing scheme, is specifically configured to: respectively determining the individual similarity of the pairing of each perception road element and each map road element in each pairing scheme; determining the overall similarity of each perception road element and map road element in each matching scheme; and determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme.
In another embodiment, the positioning offsets comprise coordinate offsets and/or directional offsets; the determining unit 1103, when configured to determine a positioning offset between the paired perceived road element and map road element based on the positioning information, is specifically configured to: sampling pixel points of the perception road elements to obtain a perception sampling point set; sampling pixel points of the map road elements to obtain a map sampling point set; determining a rotational translation matrix between sampling points included in the perception sampling point set and the map sampling point set respectively; and obtaining coordinate offset and direction offset of the perception road element and the map road element based on the rotation and translation matrix.
In another embodiment, the obtaining unit 1101 is specifically configured to: acquire the perceived road image of the road surface where the device is located based on a vision sensor arranged on the device; and determine the initial positioning information of the device based on a Global Positioning System (GPS) and/or an Inertial Measurement Unit (IMU) arranged on the device.
In another embodiment, the modification unit 1104 is further configured to: and fusing the obtained positioning information and the initial positioning information again to obtain the corrected positioning information.
Fig. 12 illustrates an apparatus according to at least one embodiment of the present disclosure, which may include a memory 1201, a processor 1202, the memory 1201 being configured to store computer instructions executable on the processor, and the processor 1202 being configured to perform positioning based on a method according to any one of the embodiments of the present disclosure when the computer instructions are executed.
At least one embodiment of the present description also provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the positioning method described in any of the present descriptions.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present description also provides a computer-readable storage medium, on which a computer program may be stored, which, when executed by a processor, implements the steps of the method for positioning a device described in any one of the embodiments of the present description. Herein, "and/or" means having at least one of the two; for example, "A and/or B" includes three cases: A, B, and "A and B".
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the data processing apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the acts or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Moreover, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In another aspect, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (20)

1. A method for device positioning, comprising:
acquiring a perception road image of a road where equipment is located and initial positioning information of the equipment;
identifying attribute information of a perceptual road element in the perceptual road image;
determining offset information between the perception road element and a map road element in a map based on a pre-established map according to the attribute information of the perception road element, wherein the offset information comprises: searching map road elements in a preset range in the map based on the initial positioning information; pairing perception road elements in the perception road image with map road elements in the preset range in pairs based on attribute information to obtain a plurality of pairing schemes, wherein at least one perception road element in different pairing schemes is different from the map road elements in the preset range in pairing mode; determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme; determining map road elements paired with the perception road elements in the pairing schemes with highest confidence level or exceeding a set threshold value; determining the position information of the paired perception road elements and map road elements in the same equipment coordinate system; determining a positioning offset between the paired perceived road element and map road element based on the location information;
And correcting the initial positioning information according to the offset information to obtain the positioning information of the equipment.
2. The method of claim 1, wherein the creating of the map comprises:
collecting map road images of roads by a collection vehicle;
identifying attribute information of map road elements in the map road image;
and establishing the map based on the attribute information of the map road elements.
3. The method of claim 1, wherein the creating of the map comprises:
obtaining semantic information and position information of map road elements in a high-precision map;
and establishing the map based on the semantic information and the position information of the map road elements.
4. The method according to any of claims 1-3, wherein the pre-established map is a semantic map.
5. The method of claim 1, wherein pairing perceived road elements in the perceived road image with map road elements within the preset range comprises:
and setting a null or virtual element in the map road elements to be paired with the perception road elements when the perception road elements in the perception road image cannot determine the paired road elements in the map road elements in the preset range.
6. The method of claim 1, wherein determining the confidence level for each of the pairing schemes based on the individual similarity and the overall similarity for each of the pairing schemes comprises:
respectively determining the individual similarity of the pairing of each perception road element and each map road element in each pairing scheme;
determining the overall similarity of each perception road element and map road element in each matching scheme;
and determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme.
7. The method of claim 1, wherein the positioning offsets comprise coordinate offsets and/or directional offsets;
determining a positioning offset between a paired perceived road element and a map road element based on the location information, comprising:
sampling pixel points of the perception road elements to obtain a perception sampling point set;
sampling pixel points of the map road elements to obtain a map sampling point set;
determining a rotational translation matrix between sampling points included in the perception sampling point set and the map sampling point set respectively;
And obtaining coordinate offset and direction offset of the perception road element and the map road element based on the rotation and translation matrix.
8. The method of any one of claims 1 to 3, wherein the obtaining of the perceived road image of the road on which the device is located and the initial positioning information of the device comprises:
acquiring the perception road image of the road surface where the equipment is located based on a vision sensor arranged on the equipment;
determining initial positioning information of the device based on a Global Positioning System (GPS) and/or an Inertial Measurement Unit (IMU) arranged on the device.
9. The method according to any one of claims 1 to 3,
after the initial positioning information is corrected according to the offset information to obtain the positioning information of the device, the method further includes:
and fusing the obtained positioning information and the initial positioning information again to obtain corrected positioning information.
10. An apparatus for device positioning, comprising:
the device comprises an obtaining unit, a processing unit and a processing unit, wherein the obtaining unit is used for obtaining a perception road image of a road where the device is located and initial positioning information of the device;
an identifying unit configured to identify attribute information of a perception road element in the perception road image;
A determining unit, configured to determine, according to the attribute information of the perceptual road element and based on a pre-established map, offset information between the perceptual road element and a map road element in the map, and specifically configured to: searching map road elements in a preset range in the map based on the initial positioning information; pairing perception road elements in the perception road image with map road elements in the preset range in pairs based on attribute information to obtain a plurality of pairing schemes, wherein at least one perception road element in different pairing schemes is different from the map road elements in the preset range in pairing mode; determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme; determining map road elements paired with the perception road elements in the pairing schemes with highest confidence level or exceeding a set threshold value; determining the position information of the paired perception road elements and map road elements in the same equipment coordinate system; determining a positioning offset between the paired perceived road element and map road element based on the location information;
And the correcting unit is used for correcting the initial positioning information according to the offset information to obtain the positioning information of the equipment.
11. The apparatus of claim 10, further comprising a map building unit configured to:
collecting map road images of roads by a collection vehicle;
identifying attribute information of map road elements in the map road image;
and establishing the map based on the attribute information of the map road elements.
12. The apparatus of claim 10, further comprising a map building unit configured to:
obtaining semantic information and position information of map road elements in a high-precision map;
and establishing the map based on the semantic information and the position information of the map road elements.
13. The apparatus of any of claims 10-12, wherein the pre-established map is a semantic map.
14. The apparatus according to claim 10, wherein the determining unit, when being configured to match the perceived road elements in the perceived road image with the map road elements within the preset range, is further configured to:
and setting a null or virtual element in the map road elements to be paired with the perception road elements when the perception road elements in the perception road image cannot determine the paired road elements in the map road elements in the preset range.
15. The apparatus according to claim 10, wherein the determining unit, when configured to determine the confidence level of each of the pairing schemes according to the individual similarity and the overall similarity of each of the pairing schemes, is specifically configured to:
respectively determining the individual similarity of the pairing of each perception road element and each map road element in each pairing scheme;
determining the overall similarity of each perception road element and map road element in each matching scheme;
and determining the confidence of each pairing scheme according to the individual similarity and the overall similarity of each pairing scheme.
16. The apparatus of claim 10, wherein the positioning offset comprises a coordinate offset and/or a directional offset;
the determining unit, when configured to determine a positioning offset between the paired perceived road element and map road element based on the location information, is specifically configured to:
sampling pixel points of the perception road elements to obtain a perception sampling point set;
sampling pixel points of the map road elements to obtain a map sampling point set;
determining a rotational translation matrix between sampling points included in the perception sampling point set and the map sampling point set respectively;
And obtaining coordinate offset and direction offset of the perception road element and the map road element based on the rotation and translation matrix.
17. The apparatus according to any one of claims 10 to 12, wherein the obtaining unit is specifically configured to:
acquiring the perception road image of the road surface where the equipment is located based on a vision sensor arranged on the equipment;
determining initial positioning information of the device based on a Global Positioning System (GPS) and/or an Inertial Measurement Unit (IMU) arranged on the device.
18. The apparatus according to any one of claims 10 to 12, wherein the correction unit is further configured to:
and fusing the obtained positioning information and the initial positioning information again to obtain corrected positioning information.
19. An apparatus, comprising a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to perform positioning based on the method of any one of claims 1 to 9 when executing the computer instructions.
20. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method of any one of claims 1 to 9.
CN201910377570.5A 2019-05-07 2019-05-07 Method, device and equipment for positioning equipment Active CN111912416B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201910377570.5A CN111912416B (en) 2019-05-07 2019-05-07 Method, device and equipment for positioning equipment
PCT/CN2020/075069 WO2020224305A1 (en) 2019-05-07 2020-02-13 Method and apparatus for device positioning, and device
KR1020217039850A KR20220004203A (en) 2019-05-07 2020-02-13 Methods, devices and devices for instrument positioning
JP2021565799A JP2022531679A (en) 2019-05-07 2020-02-13 Device positioning methods, devices, and devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910377570.5A CN111912416B (en) 2019-05-07 2019-05-07 Method, device and equipment for positioning equipment

Publications (2)

Publication Number Publication Date
CN111912416A CN111912416A (en) 2020-11-10
CN111912416B true CN111912416B (en) 2022-07-29

Family

ID=73051017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910377570.5A Active CN111912416B (en) 2019-05-07 2019-05-07 Method, device and equipment for positioning equipment

Country Status (4)

Country Link
JP (1) JP2022531679A (en)
KR (1) KR20220004203A (en)
CN (1) CN111912416B (en)
WO (1) WO2020224305A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112710301B (en) * 2020-12-09 2023-01-06 上汽大众汽车有限公司 High-precision positioning method and system for automatic driving vehicle
CN112785645A (en) * 2020-12-31 2021-05-11 北京嘀嘀无限科技发展有限公司 Terminal positioning method and device and electronic equipment
CN112985444B (en) * 2021-03-31 2023-03-24 上海商汤临港智能科技有限公司 Method and device for constructing navigation elements in map
CN113156411B (en) * 2021-05-03 2022-05-20 湖北汽车工业学院 Vehicle-mounted laser radar calibration method
CN113701770A (en) * 2021-07-16 2021-11-26 西安电子科技大学 High-precision map generation method and system
CN114136333A (en) * 2021-10-15 2022-03-04 阿波罗智能技术(北京)有限公司 High-precision map road data generation method, device and equipment based on hierarchical features
CN114111813A (en) * 2021-10-18 2022-03-01 阿波罗智能技术(北京)有限公司 High-precision map element updating method and device, electronic equipment and storage medium
CN117330097A (en) * 2023-12-01 2024-01-02 深圳元戎启行科技有限公司 Vehicle positioning optimization method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299217A (en) * 2008-06-06 2008-11-05 北京搜狗科技发展有限公司 Method, apparatus and system for processing map information
KR101454153B1 (en) * 2013-09-30 2014-11-03 국민대학교산학협력단 Navigation system for unmanned ground vehicle by sensor fusion with virtual lane
CN106127180A (en) * 2016-06-30 2016-11-16 广东电网有限责任公司电力科学研究院 A kind of robot assisted localization method and device
CN106767853A (en) * 2016-12-30 2017-05-31 中国科学院合肥物质科学研究院 A kind of automatic driving vehicle high-precision locating method based on Multi-information acquisition
CN107084727A (en) * 2017-04-12 2017-08-22 武汉理工大学 A kind of vision positioning system and method based on high-precision three-dimensional map
CN109186616A (en) * 2018-09-20 2019-01-11 禾多科技(北京)有限公司 Lane line assisted location method based on high-precision map and scene search
CN109212571A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 Navigation locating method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10012471A1 (en) * 2000-03-15 2001-09-20 Bosch Gmbh Robert Navigation system imaging for position correction avoids error build up on long journeys
JP3958133B2 (en) * 2002-07-12 2007-08-15 アルパイン株式会社 Vehicle position measuring apparatus and method
KR101919366B1 (en) * 2011-12-22 2019-02-11 한국전자통신연구원 Apparatus and method for recognizing vehicle location using in-vehicle network and image sensor
CN103954275B (en) * 2014-04-01 2017-02-08 西安交通大学 Lane line detection and GIS map information development-based vision navigation method
KR102374919B1 (en) * 2017-10-16 2022-03-16 주식회사 만도모빌리티솔루션즈 Device And Method of Automatic Driving Support
CN109345589A (en) * 2018-09-11 2019-02-15 百度在线网络技术(北京)有限公司 Method for detecting position, device, equipment and medium based on automatic driving vehicle

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299217A (en) * 2008-06-06 2008-11-05 北京搜狗科技发展有限公司 Method, apparatus and system for processing map information
KR101454153B1 (en) * 2013-09-30 2014-11-03 국민대학교산학협력단 Navigation system for unmanned ground vehicle by sensor fusion with virtual lane
CN106127180A (en) * 2016-06-30 2016-11-16 广东电网有限责任公司电力科学研究院 A kind of robot assisted localization method and device
CN106767853A (en) * 2016-12-30 2017-05-31 中国科学院合肥物质科学研究院 A kind of automatic driving vehicle high-precision locating method based on Multi-information acquisition
CN107084727A (en) * 2017-04-12 2017-08-22 武汉理工大学 A kind of vision positioning system and method based on high-precision three-dimensional map
CN109212571A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 Navigation locating method and device
CN109186616A (en) * 2018-09-20 2019-01-11 禾多科技(北京)有限公司 Lane line assisted location method based on high-precision map and scene search

Also Published As

Publication number Publication date
WO2020224305A1 (en) 2020-11-12
JP2022531679A (en) 2022-07-08
KR20220004203A (en) 2022-01-11
CN111912416A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN111912416B (en) Method, device and equipment for positioning equipment
WO2021073656A1 (en) Method for automatically labeling image data and device
CN109074085B (en) Autonomous positioning and map building method and device and robot
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
Zhao et al. A vehicle-borne urban 3-D acquisition system using single-row laser range scanners
CN109031304A (en) Vehicle positioning method in view-based access control model and the tunnel of millimetre-wave radar map feature
US20220011117A1 (en) Positioning technology
CN111199564A (en) Indoor positioning method and device of intelligent mobile terminal and electronic equipment
US20180189577A1 (en) Systems and methods for lane-marker detection
JP5404861B2 (en) Stationary object map generator
CN111261016B (en) Road map construction method and device and electronic equipment
Cao et al. Camera to map alignment for accurate low-cost lane-level scene interpretation
CN112631288B (en) Parking positioning method and device, vehicle and storage medium
JP2008065087A (en) Apparatus for creating stationary object map
CN112805766A (en) Apparatus and method for updating detailed map
CN110018503B (en) Vehicle positioning method and positioning system
CN108256563B (en) Visual dictionary closed-loop detection method and device based on distance measurement
CN111982132B (en) Data processing method, device and storage medium
CN116184430B (en) Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit
CN112304322B (en) Restarting method after visual positioning failure and vehicle-mounted terminal
WO2020113425A1 (en) Systems and methods for constructing high-definition map
CN112651991A (en) Visual positioning method, device and computer system
CN114111817B (en) Vehicle positioning method and system based on SLAM map and high-precision map matching
JP2012099010A (en) Image processing apparatus and image processing program
CN113227713A (en) Method and system for generating environment model for positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant