CN113048988A - Method and device for detecting change elements of scene corresponding to navigation map - Google Patents


Info

Publication number
CN113048988A
Authority
CN
China
Prior art keywords
map
detected
change state
state information
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911370525.3A
Other languages
Chinese (zh)
Other versions
CN113048988B (en)
Inventor
周旺
郝以平
单乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chusudu Technology Co ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd filed Critical Beijing Chusudu Technology Co ltd
Priority to CN201911370525.3A
Publication of CN113048988A
Application granted
Publication of CN113048988B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses a method and a device for detecting change elements of a scene corresponding to a navigation map, wherein the method comprises the following steps: obtaining a map to be detected and a plurality of reference data corresponding to the map to be detected, wherein each reference data comprises: a reference track and a semantic detection result corresponding to the reference track; for each piece of reference data, determining first change state information corresponding to map elements in a scene to be detected corresponding to the reference data based on a reference track contained in the reference data and a semantic detection result corresponding to the reference track and the map to be detected; determining second change state information corresponding to map elements in a scene to be detected by using a reference track and a semantic detection result corresponding to the reference track included in the plurality of reference data and the map to be detected; and determining target change state information corresponding to the map elements in the scene to be detected based on the first change state information and the second change state information so as to realize accurate detection of changed elements in the scene corresponding to the navigation map.

Description

Method and device for detecting change elements of scene corresponding to navigation map
Technical Field
The invention relates to the technical field of navigation map processing, and in particular to a method and a device for detecting change elements of a scene corresponding to a navigation map.
Background
In the fields of automatic driving and driver assistance, vehicle positioning technology is critical. To ensure the accuracy of the vehicle positioning result determined by this technology during automatic or assisted driving, the accuracy of the navigation map corresponding to the driving scene, such as a high-precision map, is equally critical.
Accordingly, in order to ensure the accuracy of the determined vehicle positioning result in the automatic driving or the auxiliary driving process, the navigation map corresponding to the driving scene should also change along with the change of the driving scene.
Therefore, after the navigation map corresponding to a driving scene has been produced, it is important to be able to observe changes in the driving scene conveniently, and thereby to determine whether the navigation map corresponding to the driving scene needs to be updated.
Disclosure of Invention
The invention provides a method and a device for detecting changed elements of a scene corresponding to a navigation map, which are used for accurately detecting the changed elements in the scene corresponding to the navigation map. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for detecting a change element of a scene corresponding to a navigation map, where the method includes:
obtaining a map to be detected and a plurality of reference data corresponding to the map to be detected, wherein each reference data comprises: a reference track and a semantic detection result corresponding to the reference track, and the reference track is a track generated by a vehicle driving in the to-be-detected scene corresponding to the map to be detected;
for each piece of reference data, determining first change state information corresponding to a map element in the scene to be detected corresponding to the reference data, based on the reference track contained in the reference data, the semantic detection result corresponding to the reference track, and the map to be detected;
determining second change state information corresponding to map elements in the scene to be detected by using the reference tracks and the semantic detection results corresponding to the reference tracks included in the plurality of reference data and the map to be detected;
and determining target change state information corresponding to the map element in the scene to be detected based on the first change state information and the second change state information.
Optionally, the step of determining, for each piece of reference data, first change state information corresponding to a map element in the scene to be detected corresponding to the reference data based on the reference track included in the reference data and the semantic detection result corresponding to the reference track and the map to be detected includes:
for each piece of reference data, determining a current map corresponding to the reference data based on a reference track contained in the reference data and a semantic detection result corresponding to the reference track;
and for each piece of reference data, determining first change state information corresponding to a map element in the scene to be detected corresponding to the reference data, based on the current map corresponding to the reference data and the map to be detected.
Optionally, the reference track comprises a plurality of track points, each track point corresponding to a semantic sub-detection result, where the semantic sub-detection result corresponding to a track point is a result determined based on an image acquired by the image acquisition device of the corresponding vehicle when the vehicle is located at the track point; each semantic sub-detection result comprises a detected map element and its image position information;
the step of determining the current map corresponding to each reference data based on the reference track contained in the reference data and the semantic detection result corresponding to the reference track comprises:
for each piece of reference data, obtaining device pose information corresponding to each track point in the reference track contained in the reference data, where the device pose information corresponding to a track point is the pose information of the image acquisition device of the corresponding vehicle when the vehicle is located at the track point;
and for each piece of reference data, determining first spatial position information of each detected map element based on equipment pose information corresponding to each track point of a reference track contained in the reference data and detected map elements and image position information thereof contained in semantic sub-detection results corresponding to each track point, and obtaining a current map corresponding to the reference data containing each detected map element and the first spatial position information thereof.
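The patent does not specify how the first spatial position of a detected map element is recovered from the device pose and the element's image position. One common technique, shown here purely as an illustrative assumption (the function name and the ground-plane assumption are hypothetical), is to back-project the pixel through a pinhole camera model and intersect the viewing ray with the ground plane, which is reasonable for road-surface elements such as lane lines:

```python
import numpy as np

def backproject_to_ground(pixel_uv, K, R_wc, t_wc, ground_z=0.0):
    """Back-project one image detection onto the ground plane z = ground_z.

    K is the 3x3 camera intrinsic matrix; (R_wc, t_wc) is the device pose
    at a track point, mapping camera coordinates to world coordinates.
    Returns the 3D world position of the detected element.
    """
    u, v = pixel_uv
    # Viewing-ray direction in camera coordinates for pixel (u, v).
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into world coordinates; the camera center is t_wc.
    ray_world = R_wc @ ray_cam
    # Intersect the ray with the plane z = ground_z.
    s = (ground_z - t_wc[2]) / ray_world[2]
    return t_wc + s * ray_world
```

For example, with a camera 10 m above the ground looking straight down, the principal point back-projects to the ground point directly below the camera.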
Optionally, the map to be detected includes a drawing map element and drawing spatial position information thereof; the map elements in the scene to be detected comprise: the detected map element and the drawing map element included in the map to be detected;
the step of determining, for each reference data, first change state information corresponding to a map element in the scene to be detected corresponding to the reference data based on the current map corresponding to the reference data and the map to be detected includes:
and for each piece of reference data, determining first change state information corresponding to the map elements in the scene to be detected corresponding to the reference data, based on each detected map element and its first spatial position information in the current map corresponding to the reference data, and each drawn map element and its drawing spatial position information in the map to be detected.
Optionally, the reference track comprises a plurality of track points, each track point corresponding to a semantic sub-detection result, where the semantic sub-detection result corresponding to a track point is a result determined based on an image acquired by the image acquisition device of the corresponding vehicle when the vehicle is located at the track point; each semantic sub-detection result comprises a detected map element and its image position information;
the step of determining second change state information corresponding to map elements in the scene to be detected by using the reference tracks and the semantic detection results corresponding to the reference tracks included in the plurality of reference data and the map to be detected includes:
determining a crowd-sourced map by using the device pose information corresponding to each track point of the reference tracks included in the plurality of reference data, and the detected map elements and their image position information included in the semantic sub-detection results corresponding to the track points, wherein the crowd-sourced map includes the detected map elements and their second spatial position information, and the device pose information corresponding to a track point is the pose information of the image acquisition device of the corresponding vehicle when the vehicle is located at the track point;
and determining second change state information corresponding to the map elements in the scene to be detected based on the detected map elements and the second spatial position information thereof included in the crowd-sourced map, and the drawn map elements and the drawn spatial position information thereof included in the map to be detected.
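A minimal sketch of this comparison between the crowd-sourced map and the map to be detected. The element representation, the distance threshold, and the state labels ("added", "removed", "unchanged") are assumptions made for illustration; the patent only states that detected elements with their second spatial positions are compared against drawn elements with their drawing spatial positions:

```python
import math

def second_change_states(detected, drawn, max_dist=1.0):
    """Compare detected map elements (from the crowd-sourced map) with the
    drawn elements of the map to be detected, by category and position.

    `detected` and `drawn` are lists of (category, (x, y)) tuples.
    Returns a dict keyed by ("detected", i) or ("drawn", j).
    """
    states = {}
    matched_drawn = set()
    for i, (cat_d, pos_d) in enumerate(detected):
        match = None
        for j, (cat_m, pos_m) in enumerate(drawn):
            if j in matched_drawn or cat_m != cat_d:
                continue
            if math.dist(pos_d, pos_m) <= max_dist:
                match = j
                break
        if match is None:
            states[("detected", i)] = "added"       # seen on the road, absent from the map
        else:
            matched_drawn.add(match)
            states[("drawn", match)] = "unchanged"  # present in both
    for j in range(len(drawn)):
        if j not in matched_drawn:
            states[("drawn", j)] = "removed"        # drawn in the map, no longer detected
    return states
```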
Optionally, the first change state information includes first change state information of a map element in the scene to be detected, which is determined based on semantic detection results corresponding to the respective reference data; the second change state information comprises second change state information of the map elements in the scene to be detected;
the step of determining target change state information corresponding to the map element in the scene to be detected based on the first change state information and the second change state information includes:
for each map element in the scene to be detected, counting, by using the first change state information, the number of pieces of first change state information corresponding to the map element, as a first number, and counting the number of pieces of first change state information indicating that the map element has changed, as a second number;
determining third change state information corresponding to each map element in the scene to be detected by using the first number and the second number corresponding to the map element;
for each map element in the scene to be detected, determining target change state information corresponding to the map element by using third change state information corresponding to the map element and second change state information corresponding to the map element; if the third change state information and the second change state information corresponding to the map element both represent that the map element has changed, and the change conditions are the same, determining that the target change state information corresponding to the map element is changed; and if the third change state information and the second change state information corresponding to the map element both represent that the map element is not changed, determining that the target change state information corresponding to the map element is unchanged.
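The counting and fusion rules above can be sketched as follows. The majority-vote threshold and the "undecided" fallback for disagreeing states are illustrative assumptions; the patent only fixes the two agreeing cases and says the first and second numbers are used:

```python
def third_change_state(first_states, vote_ratio=0.5):
    """Derive the third change state of one map element from its first
    change states across all reference data (e.g. ["changed", "unchanged"]).
    """
    first_number = len(first_states)                           # all votes
    second_number = sum(s == "changed" for s in first_states)  # "changed" votes
    if first_number and second_number / first_number > vote_ratio:
        return "changed"
    return "unchanged"

def target_change_state(third_state, second_state):
    """Fuse the voted state with the crowd-sourced-map state, following the
    rule in the text: both "changed" -> changed; both "unchanged" ->
    unchanged; otherwise fall back to "undecided" (an assumed label)."""
    if third_state == second_state == "changed":
        return "changed"
    if third_state == second_state == "unchanged":
        return "unchanged"
    return "undecided"
```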
In a second aspect, an embodiment of the present invention provides an apparatus for detecting a change element of a scene corresponding to a navigation map, where the apparatus includes:
an obtaining module configured to obtain a map to be detected and a plurality of reference data corresponding to the map to be detected, wherein each reference data includes: a reference track and a semantic detection result corresponding to the reference track, and the reference track is a track generated by a vehicle driving in the to-be-detected scene corresponding to the map to be detected;
the first determining module is configured to determine, for each piece of reference data, first change state information corresponding to a map element in the scene to be detected corresponding to the reference data based on a reference track contained in the reference data and a semantic detection result corresponding to the reference track and the map to be detected;
the second determining module is configured to determine second change state information corresponding to map elements in the scene to be detected by using the reference tracks and the semantic detection results corresponding to the reference tracks included in the plurality of reference data, and the map to be detected;
a third determining module configured to determine, based on the first change state information and the second change state information, target change state information corresponding to a map element in the scene to be detected.
Optionally, the first determining module includes:
the first determining unit is configured to determine, for each piece of reference data, a current map corresponding to the reference data based on a reference track contained in the reference data and a semantic detection result corresponding to the reference track;
the second determining unit is configured to compare the current map corresponding to each reference data with the map to be detected, and determine first change state information corresponding to map elements in the scene to be detected corresponding to the reference data.
Optionally, the reference track comprises a plurality of track points, each track point corresponding to a semantic sub-detection result, where the semantic sub-detection result corresponding to a track point is a result determined based on an image acquired by the image acquisition device of the corresponding vehicle when the vehicle is located at the track point; each semantic sub-detection result comprises a detected map element and its image position information;
the first determining unit is configured to obtain, for each piece of reference data, device pose information corresponding to each track point in the reference track included in the reference data, where the device pose information corresponding to a track point is the pose information of the image acquisition device of the corresponding vehicle when the vehicle is located at the track point;
and for each piece of reference data, determining first spatial position information of each detected map element based on equipment pose information corresponding to each track point of a reference track contained in the reference data and detected map elements and image position information thereof contained in semantic sub-detection results corresponding to each track point, and obtaining a current map corresponding to the reference data containing each detected map element and the first spatial position information thereof.
Optionally, the map to be detected includes a drawing map element and drawing spatial position information thereof; the map elements in the scene to be detected comprise: the detected map element and the drawing map element included in the map to be detected;
the second determining unit is configured to determine, for each piece of reference data, first change state information corresponding to the map elements in the scene to be detected corresponding to the reference data, based on each detected map element and its first spatial position information in the current map corresponding to the reference data, and each drawn map element and its drawing spatial position information in the map to be detected.
Optionally, the reference track comprises a plurality of track points, each track point corresponding to a semantic sub-detection result, where the semantic sub-detection result corresponding to a track point is a result determined based on an image acquired by the image acquisition device of the corresponding vehicle when the vehicle is located at the track point; each semantic sub-detection result comprises a detected map element and its image position information;
the second determining module is specifically configured to determine a crowd-sourced map by using the device pose information corresponding to each track point of the reference tracks included in the plurality of reference data, and the detected map elements and their image position information included in the semantic sub-detection results corresponding to the track points, where the crowd-sourced map includes the detected map elements and their second spatial position information, and the device pose information corresponding to a track point is the pose information of the image acquisition device of the corresponding vehicle when the vehicle is located at the track point;
and determining second change state information corresponding to the map elements in the scene to be detected based on the detected map elements and the second spatial position information thereof included in the crowd-sourced map, and the drawn map elements and the drawn spatial position information thereof included in the map to be detected.
Optionally, the first change state information includes first change state information of a map element in the scene to be detected, which is determined based on semantic detection results corresponding to the respective reference data; the second change state information comprises second change state information of the map elements in the scene to be detected;
the third determining module is specifically configured to count, by using the first change state information, the number of first change state information corresponding to each map element in the scene to be detected, as a first number; counting the number of first change state information representing the change of the map element as a second number;
determining third change state information corresponding to each map element in the scene to be detected by using the first number and the second number corresponding to the map element;
for each map element in the scene to be detected, determining target change state information corresponding to the map element by using third change state information corresponding to the map element and second change state information corresponding to the map element; if the third change state information and the second change state information corresponding to the map element both represent that the map element has changed, and the change conditions are the same, determining that the target change state information corresponding to the map element is changed; and if the third change state information and the second change state information corresponding to the map element both represent that the map element is not changed, determining that the target change state information corresponding to the map element is unchanged.
As can be seen from the above, the method and device for detecting a change element of a scene corresponding to a navigation map provided in an embodiment of the present invention can obtain a map to be detected and a plurality of reference data corresponding to the map to be detected, wherein each reference data includes a reference track and a semantic detection result corresponding to the reference track, and the reference track is a track generated by a vehicle driving in the to-be-detected scene corresponding to the map to be detected; for each piece of reference data, determine first change state information corresponding to map elements in the scene to be detected corresponding to the reference data, based on the reference track contained in the reference data, the semantic detection result corresponding to the reference track, and the map to be detected; determine second change state information corresponding to map elements in the scene to be detected by using the reference tracks and the semantic detection results corresponding to the reference tracks included in the plurality of reference data, and the map to be detected; and determine target change state information corresponding to the map elements in the scene to be detected based on the first change state information and the second change state information.
By applying the embodiment of the invention, the accuracy of the corresponding information of each map element in the scene to be detected can be improved to a certain extent through the reference tracks included by the plurality of reference data and the semantic detection results corresponding to the reference tracks, and the second change state information corresponding to the map elements in the scene to be detected is determined by comparing the information with the map to be detected; and then, by combining the first change state information corresponding to the map element in the scene to be detected corresponding to each reference data and the second change state information corresponding to the map element in the scene to be detected, the target change state information corresponding to the map element in the scene to be detected is determined together, so that the accuracy of the determined change state information of the map element in the scene to be detected can be improved to a certain extent. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. the accuracy of corresponding information of each map element in the scene to be detected can be improved to a certain extent through the reference tracks included in the plurality of reference data and the semantic detection results corresponding to the reference tracks, and the second change state information corresponding to the map elements in the scene to be detected is determined by comparing the information with the map to be detected; and then, by combining the first change state information corresponding to the map element in the scene to be detected corresponding to each reference data and the second change state information corresponding to the map element in the scene to be detected, the target change state information corresponding to the map element in the scene to be detected is determined together, so that the accuracy of the determined change state information of the map element in the scene to be detected can be improved to a certain extent.
2. Firstly, counting the number of first change state information corresponding to each map element in a scene to be detected by using the first change state information as a first number; and counting the number of first change state information representing the change of the map element as a second number, determining third change state information corresponding to the map element based on the first number and the second number corresponding to the map element, and improving the accuracy of the third change state information corresponding to each map element in the scene to be detected to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention; a person skilled in the art can derive other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a method for detecting a change element of a scene corresponding to a navigation map according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating an implementation manner of S104 according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for detecting a change element of a scene corresponding to a navigation map according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The invention provides a method and a device for detecting changed elements of a scene corresponding to a navigation map, which are used for accurately detecting the changed elements in the scene corresponding to the navigation map. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flow chart of a method for detecting a change element of a scene corresponding to a navigation map according to an embodiment of the present invention. The method may comprise the steps of:
s101: obtaining a map to be detected and a plurality of reference data corresponding to the map to be detected.
Wherein each reference data comprises a reference track and a semantic detection result corresponding to the reference track, and the reference track is a track generated by a vehicle driving in the to-be-detected scene corresponding to the map to be detected.
The method provided by the embodiment of the invention can be applied to any electronic equipment with computing capability, and the electronic equipment can be a server or terminal equipment.
In this step, the electronic device may obtain a navigation map to be detected as the map to be detected, and obtain a plurality of reference data corresponding to the map to be detected. The map to be detected is a map drawn for the scene to be detected, and may be, for example, a high-precision map. Each reference data may include a reference track and a semantic detection result corresponding to the reference track, where the reference track is a track generated by a vehicle driving in the to-be-detected scene corresponding to the map to be detected. Each reference track may include a plurality of track points, each track point corresponding to a semantic sub-detection result, which is a result detected from an image captured by the image acquisition device of the vehicle when the vehicle was at the track point.
It can be understood that, when a vehicle is at each track point in the driving process of the vehicle in the scene to be detected corresponding to the map to be detected, the image acquisition device of the vehicle may acquire an image for the environment around the track point, and further, the electronic device for detecting the change element of the scene corresponding to the navigation map provided by the embodiment of the present invention, or other electronic devices, may perform semantic detection for the image acquired by the image acquisition device of the vehicle, specifically: the semantic detection can be performed on the acquired image by using a pre-established semantic detection model to obtain a semantic detection result corresponding to the acquired image, namely a semantic sub-detection result corresponding to the track point of the reference track. Correspondingly, the semantic sub-detection result corresponding to each track point included in the reference track forms the semantic detection result corresponding to the reference track.
If the device performing semantic detection on the images acquired by the image acquisition device of the vehicle is another electronic device, the electronic device for detecting change elements of the scene corresponding to the navigation map provided by the embodiment of the present invention can directly obtain the semantic detection result corresponding to each track point of the reference track from that other electronic device. Here, the other electronic device is a device different from the electronic device that performs the method for detecting change elements of the scene corresponding to the navigation map provided by the embodiment of the present invention.
The pre-established semantic detection model may be a neural network model trained based on a sample image labeled with a map element, for example, a convolutional neural network model. The training process of the pre-established semantic detection model can refer to the training process of a neural network model in the related technology, and is not described herein again.
The semantic sub-detection result may include: the shape, size, and category of each map element detected from the corresponding image, as well as its position information in that image. The map elements include, but are not limited to: lane lines, light poles, traffic signs, parking spaces, zebra crossings, and the like. For clarity of description, in the embodiment of the present invention, a map element detected from an image is referred to as a detected map element.
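For illustration only, the semantic sub-detection result described above might be represented as follows. This is a minimal sketch in Python; all class and field names (`DetectedMapElement`, `SemanticSubDetection`, and so on) are hypothetical and not part of the embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedMapElement:
    """One map element detected from an image (illustrative fields)."""
    category: str                        # e.g. "lane_line", "light_pole", "traffic_sign"
    shape: str                           # shape descriptor of the detected element
    size: Tuple[float, float]            # detected size (illustrative: width, height)
    image_position: Tuple[float, float]  # position information in the source image (pixels)

@dataclass
class SemanticSubDetection:
    """Semantic sub-detection result corresponding to one track point."""
    track_point_id: int
    elements: List[DetectedMapElement] = field(default_factory=list)
```

A semantic detection result for a reference track would then be, for example, a list of `SemanticSubDetection` objects, one per track point.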
Each reference data may be in the form of a data packet, that is, each reference data is a data packet. Different reference data may include different reference tracks, and the different reference tracks may be generated by the same vehicle driving in the scene to be detected, or may be generated by different vehicles driving in the scene to be detected.
S102: for each piece of reference data, determining first change state information corresponding to map elements in the scene to be detected based on the reference track contained in the reference data, the semantic detection result corresponding to the reference track, and the map to be detected.
In an implementation manner of the present invention, after obtaining the map to be detected and the corresponding reference data, the electronic device may, for each piece of reference data, obtain the device pose information corresponding to each track point in the reference track contained in that reference data, and determine, based on this pose information, the map area corresponding to each track point from the map to be detected. For each track point in the reference track, the drawn map elements contained in the map area corresponding to the track point are projected into the image corresponding to the track point, using the device pose information corresponding to the track point and the preset projection model corresponding to the reference track; this yields the projection position information of those drawn map elements in the image corresponding to the track point. Based on this projection position information and the semantic sub-detection result corresponding to the track point, the change state information corresponding to the map elements in the partial area of the scene to be detected corresponding to that map area is determined. Then, for each piece of reference data, the first change state information corresponding to the map elements in the scene to be detected is determined based on the change state information corresponding to the map elements in the partial areas of the scene to be detected corresponding to the map areas of all track points in the reference track contained in that reference data.
Wherein, the corresponding equipment position appearance information of orbit point is: and when the vehicle corresponding to the track point is positioned at the track point, the pose information of the image acquisition equipment of the vehicle is acquired. The preset projection model corresponding to the reference track is as follows: a preset projection model of an image capture device of the vehicle of the reference trajectory is generated. The images corresponding to the track points are: when the vehicle corresponding to the track point is positioned at the track point, the image is acquired by the image acquisition equipment of the vehicle.
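The projection step above can be sketched as follows, assuming a standard pinhole camera model with an intrinsic matrix `K` as the "preset projection model"; the embodiment does not specify the model's form, so this is only one plausible instantiation.

```python
import numpy as np

def project_to_image(point_world, R_wc, t_wc, K):
    """Project a drawn map element's 3-D position (in the world frame of the
    map to be detected) into the image of a track point, given the device
    pose (R_wc, t_wc map camera coordinates to world coordinates) and the
    assumed pinhole intrinsics K. Returns pixel coordinates, or None if the
    point lies behind the camera at this track point."""
    p_cam = R_wc.T @ (np.asarray(point_world, float) - np.asarray(t_wc, float))
    if p_cam[2] <= 0:
        return None
    uv = K @ (p_cam / p_cam[2])  # perspective division, then intrinsics
    return uv[:2]
```

The resulting pixel coordinates play the role of the "projection position information" that is compared against the semantic sub-detection result for the track point.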
In the embodiment of the invention, for clarity of description, map elements contained in the map to be detected are called drawing map elements.
In an implementation manner, when determining the first change state information corresponding to the map elements in the scene to be detected for each piece of reference data, this first change state information may be stored in a preset storage space, so as to facilitate execution of the subsequent process and subsequent checking by a worker. In one case, the preset storage space may be referred to as an update repository, used to store the determined change state information corresponding to each map element of the scene to be detected, where the change state information includes the first change state information, the subsequent second change state information, and the target change state information.
In another implementation manner of the present invention, the step S102 may include the following steps 011-012:
011: and for each piece of reference data, determining a current map corresponding to the reference data based on the reference track contained in the reference data and the semantic detection result corresponding to the reference track.
012: and for each reference datum, determining first change state information corresponding to map elements in the scene to be detected corresponding to the reference datum based on the current map corresponding to the reference datum and the map to be detected.
In this implementation, for each piece of reference data, the electronic device may determine, based on the reference track contained in the reference data and the semantic detection result corresponding to the reference track, the current map corresponding to the reference data. The current map includes: the map elements, and their spatial position information, in the partial region of the scene to be detected corresponding to the reference track, at the time the reference track was generated. Correspondingly, the map to be detected includes: the drawn map elements, and their spatial position information, in the scene to be detected at the time corresponding to the map to be detected. For each piece of reference data, the electronic device compares the current map corresponding to the reference data with the map to be detected, and determines the first change state information corresponding to the map elements in the scene to be detected.
In one implementation of the invention, the reference track includes a plurality of track points; each track point corresponds to a semantic sub-detection result, and the semantic sub-detection result corresponding to each track point is: a result determined based on the image acquired by the image acquisition device of the corresponding vehicle when the vehicle is located at that track point; each semantic sub-detection result includes detected map elements and their image position information;
the step 011 may include the following steps 0111-0112:
0111: and aiming at each reference data, acquiring the equipment pose information corresponding to each track point in the reference track contained in the reference data.
The device pose information corresponding to a track point is: the pose information of the image acquisition device of the corresponding vehicle when the vehicle is located at that track point.
0112: and for each piece of reference data, determining first spatial position information of each detected map element based on equipment pose information corresponding to each track point of a reference track contained in the reference data and detected map elements and image position information thereof contained in semantic sub-detection results corresponding to each track point, and obtaining a current map corresponding to the reference data containing each detected map element and the first spatial position information thereof.
In this implementation, the electronic device obtains, for each piece of reference data, the device pose information corresponding to each track point in the reference track contained in the reference data. Then, based on the detected map elements and their image position information contained in the semantic sub-detection result corresponding to each track point, and the preset projection model corresponding to the reference track, the electronic device maps each detected map element into the device coordinate system of the image acquisition device corresponding to the reference track, obtaining the device coordinate information of each detected map element in that coordinate system. Using this device coordinate information together with the device pose information corresponding to each track point, the electronic device then determines the spatial position information of each detected map element in a preset spatial rectangular coordinate system, as the first spatial position information, and obtains the current map corresponding to the reference data, which contains each detected map element and its first spatial position information.
The image acquisition equipment corresponding to the reference track is the image acquisition equipment of the vehicle generating the reference track.
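The device-coordinate-to-world mapping above might be sketched as follows, again assuming a pinhole model with intrinsics `K`. Note that lifting an image detection into the device coordinate system requires a depth, which the embodiment does not describe how to obtain; the `depth` parameter here is therefore an explicit assumption (it might come, for example, from a ground-plane constraint).

```python
import numpy as np

def detection_to_world(uv, depth, K, R_wc, t_wc):
    """Lift a detected map element's image position uv to the device (camera)
    coordinate system at an assumed depth, then transform it into the preset
    spatial rectangular (world) coordinate system using the track point's
    device pose (R_wc, t_wc: camera-to-world). The result plays the role of
    the first spatial position information."""
    uv1 = np.array([uv[0], uv[1], 1.0])
    p_dev = depth * (np.linalg.inv(K) @ uv1)  # device coordinate information
    return R_wc @ p_dev + t_wc                # position in the world frame
```

Applying this to every detected map element of every semantic sub-detection result yields the current map corresponding to one piece of reference data.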
Correspondingly, in one implementation manner of the invention, the map to be detected includes drawn map elements and their drawing spatial position information; the map elements in the scene to be detected include: the detected map elements and the drawn map elements included in the map to be detected. The drawing spatial position information of a drawn map element included in the map to be detected is position information in the preset spatial rectangular coordinate system, which may be a world coordinate system;
the step 012 may include:
and for each piece of reference data, determining first change state information corresponding to the map element in the scene to be detected corresponding to the reference data based on each detected map element and first spatial position information thereof in the current map corresponding to the reference data and each drawn map element and drawing spatial position information thereof in the map to be detected.
In this implementation, the electronic device may traverse the map to be detected based on the detected map elements in the current map corresponding to each piece of reference data. For each detected map element, it first judges whether a drawn map element identical to the detected map element exists among the drawn map elements of the map to be detected; if such a drawn map element exists, it then judges whether the drawing spatial position information of that drawn map element matches the first spatial position information of the detected map element.
If the drawing spatial position information of the drawn map element identical to the detected map element matches the first spatial position information of the detected map element, it is determined that the first change state information corresponding to that element in the scene to be detected includes: information representing that the element has not changed.
If the drawing spatial position information of the drawn map element identical to the detected map element does not match the first spatial position information of the detected map element, it is determined that the first change state information corresponding to that element in the scene to be detected includes: information representing that the position of the element has changed. In this case, the first change state information may further include information representing the specific change of the element's position, for example: the element has shifted yy meters in the xx direction. In one case, if the category of the detected map element is a traffic signboard, the information representing the specific position change may be: the traffic signboard has rotated by an angle T in the zz direction.
The process of judging whether the drawing spatial position information of the drawn map element identical to the detected map element matches the first spatial position information of the detected map element may be: judging whether the distance between the two pieces of position information satisfies a preset error distance. If the distance satisfies the preset error distance, it is determined that the two pieces of position information match; if it does not, it is determined that they do not match.
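The matching test above reduces to a distance threshold. A minimal sketch, in which the value of the preset error distance (`max_error`, in metres) is purely illustrative:

```python
import math

def positions_match(drawn_pos, detected_pos, max_error=0.5):
    """Return True when the drawing spatial position information of a drawn
    map element and the first spatial position information of a detected map
    element lie within the preset error distance of each other."""
    return math.dist(drawn_pos, detected_pos) <= max_error
```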
If it is determined that no drawn map element identical to the detected map element exists among the drawn map elements of the map to be detected, it may first be judged whether there exists, among the drawn map elements of the map to be detected, a drawn map element whose drawing spatial position information matches the first spatial position information of the detected map element and whose category is the same as the category of the detected map element. If such a drawn map element exists, it is determined that the attribute of that element has changed. Correspondingly, the first change state information corresponding to that element in the scene to be detected includes: information representing that the attribute of the element has changed.
For example, when the category of the detected map element is a lane line, the attribute change may be: the lane line has changed from a solid line to a dotted line, or from a dotted line to a solid line.
If it is determined that no drawn map element exists in the map to be detected whose drawing spatial position information matches the first spatial position information of the detected map element and whose category is the same as that of the detected map element, the detected map element is determined to be a newly added map element. Correspondingly, the first change state information corresponding to the detected map element in the scene to be detected includes: information representing that the element is a newly added map element; the first change state information may further include the first spatial position information of the element.
Subsequently, the electronic device determines, from the drawn map elements of the map to be detected, the drawn map elements that correspond to no detected map element and whose attributes are unchanged, and determines these to be deleted drawn map elements. Correspondingly, the first change state information corresponding to such a drawn map element in the scene to be detected includes: information representing that the drawn map element has been deleted.
S103: determining second change state information corresponding to the map elements in the scene to be detected by using the reference tracks included in the plurality of pieces of reference data, the semantic detection results corresponding to those reference tracks, and the map to be detected.
Considering that the first spatial position information of each detected map element determined from a single piece of reference data is less accurate than the second spatial position information determined jointly from a plurality of pieces of reference data, in the embodiment of the invention the electronic device not only determines the first change state information corresponding to the map elements in the scene to be detected for each piece of reference data, but also determines second change state information corresponding to the map elements in the scene to be detected by using the reference tracks included in the plurality of pieces of reference data, the semantic detection results corresponding to those reference tracks, and the map to be detected.
In this step, the electronic device may determine a crowd-sourced map based on the reference tracks included in the plurality of pieces of reference data and the semantic detection results corresponding to those reference tracks (step 021), and further determine the second change state information corresponding to the map elements in the scene to be detected based on the crowd-sourced map and the map to be detected (step 022).
In one implementation, the reference trajectory includes a plurality of trajectory points; each track point corresponds to a semantic sub-detection result, and the semantic sub-detection result corresponding to each track point is as follows: determining a result based on an image acquired by image acquisition equipment of the corresponding vehicle when the vehicle is located at the track point; each semantic sub-detection result comprises a detected map element and image position information thereof;
the step 021 may include:
and determining the crowdsourcing map by using the equipment pose information corresponding to each track point of the reference track and included in the plurality of reference data, and the detected map elements and the image position information thereof included in the semantic sub-detection result corresponding to each track point of the reference track.
The crowd-sourced map includes each detected map element and its second spatial position information; the device pose information corresponding to each track point is: the pose information of the image acquisition device of the corresponding vehicle when the vehicle is located at that track point.
Correspondingly, the step 022 may include:
and determining second change state information corresponding to the map elements in the scene to be detected based on the detected map elements and the second spatial position information thereof included in the crowd-sourced map, and the drawn map elements and the drawn spatial position information thereof included in the map to be detected.
In this implementation, for each detected map element, the track points corresponding to the detected map element, and its image position information at each of them, can be determined based on the semantic sub-detection results corresponding to the track points. Using the target image position information corresponding to the detected map element and the preset projection model of the image acquisition device corresponding to the target track point, the device position information of the detected map element in the device coordinate system of that image acquisition device is determined; using this device position information and the device pose information corresponding to the target track point, the spatial position information of the detected map element in the preset spatial rectangular coordinate system is then determined. Next, using this spatial position information, the device pose information corresponding to the other track points (those corresponding to the detected map element other than the target track point), and the preset projection models corresponding to those other track points, the projection position information of the detected map element in the image corresponding to each other track point is determined. The reprojection error corresponding to the detected map element is then determined from this projection position information and the image position information of the detected map element at each other track point. If the reprojection error is smaller than a preset error, the spatial position information of the detected map element in the preset spatial rectangular coordinate system is determined to be the second spatial position information; otherwise, the value of the spatial position information is adjusted, and the process returns to the step of determining the projection position information of the detected map element in the image corresponding to each other track point.
The process of determining the reprojection error corresponding to the detected map element may be: calculating the distance between each pair of projection position information and image position information that have a corresponding relationship, and determining the sum, or the average, of these distances as the reprojection error corresponding to the detected map element. It can be understood that the projection position information corresponds to the other track points of the detected map element, and the image position information also corresponds to those track points, so the projection position information and the image position information have a corresponding relationship through them.
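The reprojection error computation just described is straightforward to sketch; `use_mean` selects between the sum and the average mentioned above, and the pairing of the two lists stands in for the "corresponding relationship":

```python
import numpy as np

def reprojection_error(projected, observed, use_mean=True):
    """projected: projection position information of the detected map element
    in the images of the other track points; observed: its image position
    information at those same track points, in corresponding order."""
    dists = [np.linalg.norm(np.asarray(p, float) - np.asarray(o, float))
             for p, o in zip(projected, observed)]
    return float(np.mean(dists) if use_mean else np.sum(dists))
```

The iterative refinement in the embodiment (adjust the spatial position, re-project, re-evaluate until the error falls below the preset error) is essentially a small nonlinear least-squares problem over this error.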
The target image position information corresponding to the detected map element is: any one piece of image position information corresponding to the detected map element; the target track point corresponding to the detected map element is: the track point corresponding to that target image position information. The images corresponding to the other track points of the detected map element are: the images acquired by the vehicle at those other track points.
For the process of determining the second change state information corresponding to the map elements in the scene to be detected, based on the detected map elements and their second spatial position information included in the crowd-sourced map and the drawn map elements and their drawing spatial position information included in the map to be detected, reference may be made to the above process of determining, for each piece of reference data, the first change state information based on the current map corresponding to the reference data and the map to be detected; details are not repeated here.
Since the second spatial position information of each detected map element is determined from the device pose information corresponding to the track points of the reference tracks included in a plurality of pieces of reference data, together with the detected map elements and their image position information included in the corresponding semantic sub-detection results, its accuracy is higher than that of the first spatial position information determined from a single piece of reference data.
S104: and determining target change state information corresponding to the map elements in the scene to be detected based on the first change state information and the second change state information.
The first change state information can represent the change state of each map element in the scene to be detected, and so can the second change state information; based on both, more accurate target change state information corresponding to the map elements in the scene to be detected can be determined.
In this step, for each map element in the scene to be detected, the electronic device may judge whether the first change state information and the second change state information both represent that the map element has not changed; if so, it determines that the target change state information corresponding to the map element includes: information representing that the map element has not changed. If the first change state information and the second change state information both represent that the map element has changed, and the represented changes are the same, it determines that the target change state information corresponding to the map element includes: information representing that the map element has changed.
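The agreement rule above can be sketched as a small decision function. The state labels are illustrative, and the behaviour in the disagreement case (returning `None`, i.e. no target state determined) is an assumption, since the embodiment only specifies the two agreement cases:

```python
def target_change_state(first, second):
    """Combine the first and second change state information for one map
    element. A target state is emitted only when the two agree; for changes,
    they must represent the same change."""
    if first == "unchanged" and second == "unchanged":
        return "unchanged"
    if first == second:          # both represent the same change
        return first
    return None                  # disagreement: left undecided here
```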
By applying the embodiment of the invention, the accuracy of the information corresponding to each map element in the scene to be detected can be improved to a certain extent by using the reference tracks included in the plurality of pieces of reference data and the semantic detection results corresponding to those reference tracks, and comparing them with the map to be detected to determine the second change state information corresponding to the map elements in the scene to be detected. Then, by jointly considering the first change state information determined for each piece of reference data and the second change state information, the target change state information corresponding to the map elements in the scene to be detected is determined, so that the accuracy of the determined change state information of the map elements in the scene to be detected can be improved to a certain extent.
In another embodiment of the invention, the first change state information includes first change state information of map elements in the scene to be detected, which is determined based on semantic detection results corresponding to the respective reference data; the second change state information comprises second change state information of the map elements in the scene to be detected;
as shown in fig. 2, the S104 may include the following steps:
S201: counting, for each map element in the scene to be detected, the number of pieces of first change state information corresponding to the map element, as a first number; and counting the number of pieces of first change state information representing that the map element has changed, as a second number.
S202: and determining third change state information corresponding to each map element in the scene to be detected by using the first number and the second number corresponding to the map element.
S203: and determining target change state information corresponding to each map element in the scene to be detected by using the third change state information corresponding to the map element and the second change state information corresponding to the map element.
If the third change state information and the second change state information corresponding to a map element both represent that the map element has changed, and the represented changes are the same, the target change state information corresponding to the map element is determined to be changed; if both represent that the map element has not changed, the target change state information corresponding to the map element is determined to be unchanged.
In order to determine more accurate change state information for each map element in the scene to be detected, and to consolidate the first change state information determined for each piece of reference data, the electronic device may, for each map element in the scene to be detected, count the number of pieces of first change state information corresponding to the map element as a first number, and count the number of pieces of first change state information representing that the map element has changed as a second number; determine third change state information corresponding to the map element by using the first number and the second number; and then, for each map element, determine the target change state information corresponding to the map element by using its third change state information and its second change state information.
The third change state information corresponding to a map element may be determined from the first number and the second number as follows: calculate the ratio of the second number to the first number as a first ratio; if the first ratio exceeds a first preset ratio, determine that the third change state information corresponding to the map element indicates that the map element has changed; otherwise, the third change state information may be determined to indicate that the map element has not changed. Alternatively: when the first ratio does not exceed the first preset ratio, calculate the difference between the first number and the second number as a third number, which is the number of pieces of first change state information indicating that the map element has not changed; calculate the ratio of the third number to the first number as a second ratio; and if the second ratio exceeds a second preset ratio, determine that the third change state information indicates that the map element has not changed.
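The ratio-based voting step can be sketched as follows. The function name, the threshold defaults, and the "undetermined" outcome are hypothetical; the patent does not fix concrete values for the first and second preset ratios.

```python
def third_change_state(first_number: int, second_number: int,
                       first_preset_ratio: float = 0.6,
                       second_preset_ratio: float = 0.6) -> str:
    """Decide the third change state for one map element.

    first_number:  total pieces of first change state information
    second_number: pieces indicating that the element has changed
    Returns "changed", "unchanged", or "undetermined".
    """
    if first_number == 0:
        return "undetermined"  # no observations for this element
    first_ratio = second_number / first_number
    if first_ratio > first_preset_ratio:
        return "changed"
    # Otherwise check the share of "not changed" observations.
    third_number = first_number - second_number
    second_ratio = third_number / first_number
    if second_ratio > second_preset_ratio:
        return "unchanged"
    return "undetermined"  # neither threshold cleared
```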
In one implementation, the pieces of first change state information may also be clustered by change category, where the change categories may include, but are not limited to: a category indicating that a map element is newly added; a category indicating that a map element has been deleted; a category indicating that the position of a map element has changed; a category indicating that a map element has not changed; a category indicating that a map element is a lane line whose attributes have changed; a category indicating that a map element is a traffic sign whose direction has changed; and the like. In one case, after the first change state information is clustered into the different change categories, the clustering result may be stored in a preset storage space for subsequent inspection by a worker.
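A minimal sketch of this clustering step, assuming each piece of first change state information can be reduced to an `(element_id, category)` pair; the category names are illustrative placeholders:

```python
from collections import defaultdict

def cluster_by_change_category(first_change_infos):
    """Group pieces of first change state information by change category.

    first_change_infos: iterable of (element_id, category) pairs, where
    category is e.g. "added", "deleted", "position_changed", "unchanged",
    "lane_line_attribute_changed", or "sign_direction_changed"
    (hypothetical labels).
    """
    clusters = defaultdict(list)
    for element_id, category in first_change_infos:
        clusters[category].append(element_id)
    return dict(clusters)  # category -> list of element ids
```

The returned mapping could be serialized to the preset storage space for later review.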
Corresponding to the above method embodiment, an embodiment of the present invention provides a device for detecting change elements of a scene corresponding to a navigation map; as shown in fig. 3, the device may include:
an obtaining module 310, configured to obtain a map to be detected and a plurality of reference data corresponding to the map to be detected, where each reference data includes: a reference track and a semantic detection result corresponding to the reference track, the reference track being a track generated by a vehicle running in the scene to be detected corresponding to the map to be detected;
a first determining module 320, configured to determine, for each reference data, first change state information corresponding to a map element in the scene to be detected corresponding to the reference data, based on a reference track included in the reference data, a semantic detection result corresponding to the reference track, and the map to be detected;
a second determining module 330, configured to determine second change state information corresponding to map elements in the scene to be detected by using the reference tracks included in the plurality of reference data, the semantic detection results corresponding to the reference tracks, and the map to be detected;
a third determining module 340 configured to determine, based on the first change state information and the second change state information, target change state information corresponding to a map element in the scene to be detected.
By applying the embodiment of the invention, the accuracy of the information corresponding to each map element in the scene to be detected can be improved to a certain extent: the second change state information corresponding to the map elements is determined by comparing the reference tracks included in the plurality of reference data, together with the semantic detection results corresponding to those tracks, against the map to be detected; then, the target change state information corresponding to the map elements is determined jointly from the first change state information determined for each reference data and the second change state information, so that the accuracy of the determined change state information of the map elements in the scene to be detected can be improved to a certain extent.
In another embodiment of the present invention, the first determining module 320 includes:
a first determining unit (not shown in the figure) configured to determine, for each reference data, a current map corresponding to the reference data based on a reference track included in the reference data and a semantic detection result corresponding to the reference track;
and a second determining unit (not shown in the figure) configured to, for each reference data, compare the current map corresponding to the reference data with the map to be detected, and determine first change state information corresponding to a map element in the scene to be detected corresponding to the reference data.
In another embodiment of the invention, the reference trajectory comprises a plurality of trajectory points; each track point corresponds to a semantic sub-detection result, and the semantic sub-detection result corresponding to each track point is a result determined based on an image acquired by the image acquisition equipment of the corresponding vehicle at the position of the track point; each semantic sub-detection result comprises a detected map element and image position information thereof;
the first determining unit is configured to obtain, for each reference data, device pose information corresponding to each track point in a reference track included in the reference data, where the device pose information corresponding to a track point is the pose information of the image acquisition device of the corresponding vehicle at the position of that track point;
and, for each reference data, determine first spatial position information of each detected map element based on the device pose information corresponding to each track point of the reference track contained in the reference data and the detected map elements and image position information thereof contained in the semantic sub-detection results corresponding to the track points, thereby obtaining a current map, corresponding to the reference data, that contains each detected map element and the first spatial position information thereof.
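The computation of first spatial position information from a device pose and an image position can be illustrated with standard pinhole back-projection. This is only a sketch under stated assumptions: the patent specifies neither the camera model nor how depth along the viewing ray is recovered, so the `depth` argument is a placeholder (multi-view triangulation or a ground-plane constraint could serve instead).

```python
import numpy as np

def element_world_position(pixel, depth, K, R_wc, t_wc):
    """Back-project one detected element's image position into world coordinates.

    pixel: (u, v) image position of the detected map element
    depth: assumed distance along the camera ray (hypothetical input)
    K:     3x3 camera intrinsic matrix
    R_wc, t_wc: camera-to-world rotation (3x3) and translation (3,),
                taken from the device pose at the track point
    """
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    p_cam = depth * ray_cam / ray_cam[2]                # point at the given depth
    return R_wc @ p_cam + t_wc                          # transform into world frame
```

Repeating this for every detected element along the reference track would yield the per-track current map described above.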
In another embodiment of the invention, the map to be detected comprises drawing map elements and drawing space position information thereof; the map elements in the scene to be detected comprise: the detected map element and the drawing map element included in the map to be detected;
the second determining unit is configured to determine, for each reference data, first change state information corresponding to each detected map element and first spatial position information thereof in the current map corresponding to the reference data, and first change state information corresponding to each drawn map element and drawing spatial position information thereof in the to-be-detected map corresponding to the reference data.
Optionally, the reference track includes a plurality of track points; each track point corresponds to a semantic sub-detection result, and the semantic sub-detection result corresponding to each track point is a result determined based on an image acquired by the image acquisition equipment of the corresponding vehicle when the vehicle is located at the track point; each semantic sub-detection result includes a detected map element and image position information thereof;
the second determining module 330 is specifically configured to determine a crowd-sourced map by using the device pose information corresponding to each track point of the reference tracks included in the plurality of reference data, and the detected map elements and the image position information thereof included in the semantic sub-detection results corresponding to the track points, where the crowd-sourced map includes the detected map elements and the second spatial position information thereof, and the device pose information corresponding to each track point is the pose information of the image acquisition equipment of the vehicle acquired when the corresponding vehicle is located at the track point;
and determining second change state information corresponding to the map elements in the scene to be detected based on the detected map elements and the second spatial position information thereof included in the crowd-sourced map, and the drawn map elements and the drawn spatial position information thereof included in the map to be detected.
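One simple way to realize a crowd-sourced map from many reference tracks is to pool the per-track position estimates of each detected element and average them. The averaging and the `(element_id, position)` observation format are illustrative assumptions, not taken from the text; a robust aggregator (e.g. a median or an outlier-rejecting mean) could be substituted.

```python
from collections import defaultdict
import numpy as np

def build_crowdsourced_map(observations):
    """Aggregate per-track element observations into a crowd-sourced map.

    observations: iterable of (element_id, position) pairs, where position
    is the first spatial position estimated from one reference track.
    Returns {element_id: mean position}, a simple stand-in for the
    second spatial position information described above.
    """
    buckets = defaultdict(list)
    for element_id, position in observations:
        buckets[element_id].append(np.asarray(position, dtype=float))
    return {eid: np.mean(ps, axis=0) for eid, ps in buckets.items()}
```

Comparing the aggregated positions against the drawing spatial position information in the map to be detected would then yield the second change state information.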
Optionally, the first change state information includes first change state information of a map element in the scene to be detected, which is determined based on semantic detection results corresponding to the respective reference data; the second change state information comprises second change state information of the map elements in the scene to be detected;
the third determining module 340 is specifically configured to: for each map element in the scene to be detected, count, by using the first change state information, the number of pieces of first change state information corresponding to the map element as a first number, and count the number of pieces of first change state information indicating that the map element has changed as a second number;
determine, for each map element in the scene to be detected, third change state information corresponding to the map element by using the first number and the second number corresponding to that map element;
and, for each map element in the scene to be detected, determine target change state information corresponding to the map element by using the third change state information and the second change state information corresponding to the map element, where: if the third change state information and the second change state information both indicate that the map element has changed and indicate the same change, the target change state information is determined to be changed; and if both indicate that the map element has not changed, the target change state information is determined to be unchanged.
The device and system embodiments correspond to the method embodiments and have the same technical effects; for a specific description, reference may be made to the method embodiments, and details are not repeated here.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for detecting a change element of a scene corresponding to a navigation map is characterized by comprising the following steps:
obtaining a map to be detected and a plurality of reference data corresponding to the map to be detected, wherein each reference data comprises: a reference track and a semantic detection result corresponding to the reference track, the reference track being a track generated by a vehicle running in the scene to be detected corresponding to the map to be detected;
for each reference datum, determining first change state information corresponding to a map element in the scene to be detected corresponding to the reference datum, based on a reference track contained in the reference datum, a semantic detection result corresponding to the reference track, and the map to be detected;
determining second change state information corresponding to map elements in the scene to be detected by using the reference tracks included in the plurality of reference data, the semantic detection results corresponding to the reference tracks, and the map to be detected;
and determining target change state information corresponding to the map element in the scene to be detected based on the first change state information and the second change state information.
2. The method according to claim 1, wherein the step of determining, for each reference datum, first change state information corresponding to a map element in the scene to be detected corresponding to the reference datum, based on a reference track included in the reference datum, a semantic detection result corresponding to the reference track, and the map to be detected, comprises:
for each piece of reference data, determining a current map corresponding to the reference data based on a reference track contained in the reference data and a semantic detection result corresponding to the reference track;
and for each reference datum, determining first change state information corresponding to a map element in the scene to be detected corresponding to the reference datum based on the current map corresponding to the reference datum and the map to be detected.
3. The method of claim 2, wherein the reference trajectory comprises a plurality of trajectory points; each track point corresponds to a semantic sub-detection result, and the semantic sub-detection result corresponding to each track point is a result determined based on an image acquired by the image acquisition equipment of the corresponding vehicle at the position of the track point; each semantic sub-detection result comprises a detected map element and image position information thereof;
the step of determining the current map corresponding to each reference data based on the reference track contained in the reference data and the semantic detection result corresponding to the reference track comprises:
and, for each reference data, acquiring equipment pose information corresponding to each track point in a reference track contained in the reference data, wherein the equipment pose information corresponding to the track point is the pose information of the image acquisition equipment of the corresponding vehicle at the position of that track point;
and for each piece of reference data, determining first spatial position information of each detected map element based on equipment pose information corresponding to each track point of a reference track contained in the reference data and detected map elements and image position information thereof contained in semantic sub-detection results corresponding to each track point, and obtaining a current map corresponding to the reference data containing each detected map element and the first spatial position information thereof.
4. The method according to claim 3, wherein the map to be detected comprises a drawing map element and drawing spatial position information thereof; the map elements in the scene to be detected comprise: the detected map element and the drawing map element included in the map to be detected;
the step of determining, for each reference data, first change state information corresponding to a map element in the scene to be detected corresponding to the reference data based on the current map corresponding to the reference data and the map to be detected includes:
and for each reference datum, determining first change state information corresponding to the map element in the scene to be detected corresponding to the reference datum based on each detected map element and first spatial position information thereof in the current map corresponding to the reference datum, and each drawn map element and drawing spatial position information thereof in the map to be detected.
5. The method of any one of claims 1-4, wherein the reference trajectory comprises a plurality of trajectory points; each track point corresponds to a semantic sub-detection result, and the semantic sub-detection result corresponding to each track point is a result determined based on an image acquired by the image acquisition equipment of the corresponding vehicle when the vehicle is located at the track point; each semantic sub-detection result comprises a detected map element and image position information thereof;
the step of determining second change state information corresponding to map elements in the scene to be detected by using the reference tracks included in the plurality of reference data, the semantic detection results corresponding to the reference tracks, and the map to be detected includes:
determining a crowd-sourced map by using the device pose information corresponding to each track point of the reference tracks included in the plurality of reference data, and the detected map elements and the image position information thereof included in the semantic sub-detection results corresponding to the track points, wherein the crowd-sourced map includes the detected map elements and the second spatial position information thereof, and the device pose information corresponding to each track point is the pose information of the image acquisition equipment of the vehicle acquired when the corresponding vehicle is located at the track point;
and determining second change state information corresponding to the map elements in the scene to be detected based on the detected map elements and the second spatial position information thereof included in the crowd-sourced map, and the drawn map elements and the drawn spatial position information thereof included in the map to be detected.
6. The method according to any one of claims 1 to 5, wherein the first change state information includes first change state information of a map element in the scene to be detected, which is determined based on semantic detection results corresponding to respective reference data; the second change state information comprises second change state information of the map elements in the scene to be detected;
the step of determining target change state information corresponding to the map element in the scene to be detected based on the first change state information and the second change state information includes:
counting the number of first change state information corresponding to each map element in the scene to be detected by using the first change state information as a first number; counting the number of first change state information representing the change of the map element as a second number;
determining third change state information corresponding to each map element in the scene to be detected by using the first number and the second number corresponding to the map element;
for each map element in the scene to be detected, determining target change state information corresponding to the map element by using third change state information corresponding to the map element and second change state information corresponding to the map element; if the third change state information and the second change state information corresponding to the map element both represent that the map element has changed, and the change conditions are the same, determining that the target change state information corresponding to the map element is changed; and if the third change state information and the second change state information corresponding to the map element both represent that the map element is not changed, determining that the target change state information corresponding to the map element is unchanged.
7. A device for detecting a change element of a scene corresponding to a navigation map, the device comprising:
an obtaining module configured to obtain a map to be detected and a plurality of reference data corresponding to the map to be detected, wherein each reference data includes: a reference track and a semantic detection result corresponding to the reference track, the reference track being a track generated by a vehicle running in the scene to be detected corresponding to the map to be detected;
the first determining module is configured to determine, for each reference data, first change state information corresponding to a map element in the scene to be detected corresponding to the reference data, based on a reference track contained in the reference data, a semantic detection result corresponding to the reference track, and the map to be detected;
the second determining module is configured to determine second change state information corresponding to map elements in the scene to be detected by using the reference tracks included in the plurality of reference data, the semantic detection results corresponding to the reference tracks, and the map to be detected;
a third determining module configured to determine, based on the first change state information and the second change state information, target change state information corresponding to a map element in the scene to be detected.
8. The apparatus of claim 7, wherein the first determining module comprises:
the first determining unit is configured to determine, for each piece of reference data, a current map corresponding to the reference data based on a reference track contained in the reference data and a semantic detection result corresponding to the reference track;
the second determining unit is configured to compare the current map corresponding to each reference data with the map to be detected, and determine first change state information corresponding to map elements in the scene to be detected corresponding to the reference data.
9. The apparatus of claim 8, wherein the reference trajectory comprises a plurality of trajectory points; each track point corresponds to a semantic sub-detection result, and the semantic sub-detection result corresponding to each track point is a result determined based on an image acquired by the image acquisition equipment of the corresponding vehicle at the position of the track point; each semantic sub-detection result comprises a detected map element and image position information thereof;
the first determining unit is configured to obtain, for each reference data, device pose information corresponding to each track point in a reference track included in the reference data, where the device pose information corresponding to the track point is the pose information of the image acquisition device of the corresponding vehicle at the position of that track point;
and for each piece of reference data, determining first spatial position information of each detected map element based on equipment pose information corresponding to each track point of a reference track contained in the reference data and detected map elements and image position information thereof contained in semantic sub-detection results corresponding to each track point, and obtaining a current map corresponding to the reference data containing each detected map element and the first spatial position information thereof.
10. The apparatus of claim 9, wherein the map to be detected includes a drawing map element and drawing spatial position information thereof; the map elements in the scene to be detected comprise: the detected map element and the drawing map element included in the map to be detected;
the second determining unit is configured to determine, for each reference data, first change state information corresponding to each detected map element and first spatial position information thereof in the current map corresponding to the reference data, and first change state information corresponding to each drawn map element and drawing spatial position information thereof in the to-be-detected map corresponding to the reference data.
CN201911370525.3A 2019-12-26 2019-12-26 Method and device for detecting change elements of scene corresponding to navigation map Active CN113048988B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911370525.3A CN113048988B (en) 2019-12-26 2019-12-26 Method and device for detecting change elements of scene corresponding to navigation map


Publications (2)

Publication Number Publication Date
CN113048988A true CN113048988A (en) 2021-06-29
CN113048988B CN113048988B (en) 2022-12-23

Family

ID=76505697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911370525.3A Active CN113048988B (en) 2019-12-26 2019-12-26 Method and device for detecting change elements of scene corresponding to navigation map

Country Status (1)

Country Link
CN (1) CN113048988B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762397A (en) * 2021-09-10 2021-12-07 北京百度网讯科技有限公司 Detection model training and high-precision map updating method, device, medium and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106767812A (en) * 2016-11-25 2017-05-31 梁海燕 A kind of interior semanteme map updating method and system based on Semantic features extraction
CN109641538A (en) * 2016-07-21 2019-04-16 国际智能技术公司 It is created using vehicle, updates the system and method for map
CN110146097A (en) * 2018-08-28 2019-08-20 北京初速度科技有限公司 Method and system for generating automatic driving navigation map, vehicle-mounted terminal and server
CN110160544A (en) * 2019-06-12 2019-08-23 北京深思敏行科技有限责任公司 A kind of high-precision map crowdsourcing more new system based on edge calculations
CN110287276A (en) * 2019-05-27 2019-09-27 百度在线网络技术(北京)有限公司 High-precision map updating method, device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109641538A (en) * 2016-07-21 2019-04-16 国际智能技术公司 It is created using vehicle, updates the system and method for map
CN106767812A (en) * 2016-11-25 2017-05-31 梁海燕 A kind of interior semanteme map updating method and system based on Semantic features extraction
US20180150693A1 (en) * 2016-11-25 2018-05-31 Deke Guo Indoor semantic map updating method and system based on semantic information extraction
CN110146097A (en) * 2018-08-28 2019-08-20 北京初速度科技有限公司 Method and system for generating automatic driving navigation map, vehicle-mounted terminal and server
CN110287276A (en) * 2019-05-27 2019-09-27 百度在线网络技术(北京)有限公司 High-precision map updating method, device and storage medium
CN110160544A (en) * 2019-06-12 2019-08-23 北京深思敏行科技有限责任公司 A kind of high-precision map crowdsourcing more new system based on edge calculations

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762397A (en) * 2021-09-10 2021-12-07 北京百度网讯科技有限公司 Detection model training and high-precision map updating method, device, medium and product
CN113762397B (en) * 2021-09-10 2024-04-05 北京百度网讯科技有限公司 Method, equipment, medium and product for training detection model and updating high-precision map

Also Published As

Publication number Publication date
CN113048988B (en) 2022-12-23

Similar Documents

Publication Publication Date Title
US11105638B2 (en) Method, apparatus, and computer readable storage medium for updating electronic map
CN110146097B (en) Method and system for generating automatic driving navigation map, vehicle-mounted terminal and server
CN110954112B (en) Method and device for updating matching relation between navigation map and perception image
CN111912416B (en) Method, device and equipment for positioning equipment
CN113034566B (en) High-precision map construction method and device, electronic equipment and storage medium
US20220011117A1 (en) Positioning technology
CN112154445A (en) Method and device for determining lane line in high-precision map
CN104819726A (en) Navigation data processing method, navigation data processing device and navigation terminal
CN111750882B (en) Method and device for correcting vehicle pose during initialization of navigation map
CN115164918B (en) Semantic point cloud map construction method and device and electronic equipment
CN114459471B (en) Positioning information determining method and device, electronic equipment and storage medium
CN112650772B (en) Data processing method, data processing device, storage medium and computer equipment
CN111507204A (en) Method and device for detecting countdown signal lamp, electronic equipment and storage medium
CN111768498A (en) Visual positioning method and system based on dense semantic three-dimensional map and mixed features
CN114639085A (en) Traffic signal lamp identification method and device, computer equipment and storage medium
CN114419922B (en) Parking space identification method and device
CN116484036A (en) Image recommendation method, device, electronic equipment and computer readable storage medium
CN113048988B (en) Method and device for detecting change elements of scene corresponding to navigation map
CN111754388B (en) Picture construction method and vehicle-mounted terminal
CN113758492A (en) Map detection method and device
CN116295463A (en) Automatic labeling method for navigation map elements
CN114743395A (en) Signal lamp detection method, device, equipment and medium
CN114791282A (en) Road facility coordinate calibration method and device based on vehicle high-precision positioning
CN113063426B (en) Position information determining method and device
CN111488771B (en) OCR hooking method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant