CN111597986A - Method, apparatus, device and storage medium for generating information - Google Patents

Method, apparatus, device and storage medium for generating information

Info

Publication number
CN111597986A
CN111597986A (application CN202010411190.1A)
Authority
CN
China
Prior art keywords
traffic indicator
image
information
precision map
traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010411190.1A
Other languages
Chinese (zh)
Other versions
CN111597986B (en)
Inventor
He Lei (何雷)
Yang Guangyao (杨光垚)
Shen Lixia (沈莉霞)
Song Shiyu (宋适宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010411190.1A priority Critical patent/CN111597986B/en
Publication of CN111597986A publication Critical patent/CN111597986A/en
Application granted granted Critical
Publication of CN111597986B publication Critical patent/CN111597986B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a method, apparatus, device and storage medium for generating information, and relates to the field of automatic driving. The specific implementation scheme is as follows: a traffic indicator image is segmented from a target image; camera pose information corresponding to the target image is acquired; according to the camera pose information, high-precision map data matched with the target image is projected onto the plane of the target image to generate a projected image, where the matched high-precision map data includes three-dimensional data of traffic indicators whose position and orientation meet preset requirements; and traffic indicator change information is generated based on a comparison of the traffic indicator image and the projected image, where the traffic indicator change information is used to indicate whether the traffic indicator corresponding to the matched high-precision map data has changed. In this way, whether the traffic indicators represented in the high-precision map data have actually changed can be determined quickly, promptly and at low cost, providing a solid data foundation for minute-level automatic updating of high-precision maps.

Description

Method, apparatus, device and storage medium for generating information
Technical Field
Embodiments of the application relate to the field of computer technology, and in particular to high-precision map change detection in the field of automatic driving.
Background
With the development of automatic driving technology, keeping the core elements of a high-precision map (such as traffic lights) consistent with changing real-world conditions plays a significant role in ensuring the timeliness of the high-precision map and the safety of an automatic driving system.
In the prior art, a dedicated map collection vehicle is generally used to cover main roads quickly and transmit the collected data back. The collected point clouds and images are analyzed and processed, target elements on the road are fused in the background in combination with positioning data, and the global information of the high-precision map is constructed from the pieces of local information. However, this approach suffers from a long collection cycle, a long mapping cycle and a high production cost.
Disclosure of Invention
A method, apparatus, device, and storage medium for generating information are provided.
According to a first aspect, there is provided a method for generating information, the method comprising: segmenting a traffic indicator image from the target image; acquiring camera attitude information corresponding to a target image; according to the camera attitude information, projecting high-precision map data matched with the target image to a plane where the target image is located to generate a projected image, wherein the matched high-precision map data comprises three-dimensional data of a traffic indicator with the position and the orientation meeting preset requirements; generating traffic indicator change information based on the comparison of the traffic indicator image and the projected image, wherein the traffic indicator change information is used for indicating whether the traffic indicator corresponding to the matched high-precision map data is changed or not, and the traffic indicator change information is used for indicating at least one of the following items: the traffic indicator is increased, the traffic indicator is decreased, and the traffic indicator is not changed.
According to a second aspect, there is provided an apparatus for generating information, the apparatus comprising: a segmentation unit configured to segment a traffic indicator image from the target image; a first acquisition unit configured to acquire camera pose information corresponding to a target image; the projection unit is configured to project high-precision map data matched with the target image to a plane where the target image is located according to the camera posture information to generate a projection image, wherein the matched high-precision map data comprises three-dimensional data of a traffic indicator with the position and the orientation meeting preset requirements; a generating unit configured to generate traffic indicator change information based on a comparison of the traffic indicator image and the projected image, wherein the traffic indicator change information is used for indicating whether a traffic indicator corresponding to the matched high-precision map data is changed, and the traffic indicator change information is used for indicating at least one of the following: the traffic indicator is increased, the traffic indicator is decreased, and the traffic indicator is not changed.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for enabling a computer to perform the method as described in any one of the implementations of the first aspect.
According to the technology of the application, whether the traffic indicators (such as traffic lights) represented in the high-precision map data have actually changed can be determined quickly, promptly and at low cost, and the approach generalizes well. It can further provide a solid data foundation for minute-level automatic updating of high-precision maps, thereby addressing the long collection cycle, long mapping cycle and high production cost of existing high-precision map updating methods.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present application;
FIG. 2 is a schematic diagram according to a second embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario in which a method for generating information according to an embodiment of the present application may be implemented;
FIG. 4 is a schematic diagram of an apparatus for generating information according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing a method for generating information according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding, and these details should be regarded as exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
Fig. 1 is a schematic diagram 100 illustrating a first embodiment according to the present application. The method for generating information comprises the following steps:
S101: segmenting a traffic indicator image from the target image.
In this embodiment, the execution body of the method for generating information may segment the traffic indicator image from the target image in various ways. The target image may include an image acquired from an onboard camera and generally contains images of various traffic indicators. The traffic indicators may include, but are not limited to, at least one of: traffic signal lights for motor vehicle lanes, traffic signal lights for sidewalks, speed limit signs and other traffic signs. The image segmentation method may include, but is not limited to, at least one of: threshold-based segmentation, the watershed algorithm, edge-detection-based segmentation, segmentation based on wavelet analysis and wavelet transforms, segmentation based on active contour models, and deep-learning-based segmentation models.
In some optional implementations of this embodiment, the execution body may further input the target image into a pre-trained traffic indicator segmentation model and generate a segmentation result containing at least one traffic indicator image. The traffic indicator segmentation model may include an encoding network and a decoding network based on atrous separable convolutions. The model may use Xception as the backbone network, with an Atrous Spatial Pyramid Pooling (ASPP) module added on top of the original encoding and decoding networks, so that convolutional features at multiple scales can be obtained. The model may use depthwise separable convolutions, which not only reduce the number of network parameters but also improve the robustness of network inference. In practice, the DeepLab v3+ deep neural network structure may be used as the initial model and trained on a preset set of training samples with a machine learning algorithm to obtain the traffic indicator segmentation model.
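As an illustration only, the following is a minimal sketch of this segmentation step, assuming a hypothetical DeepLab-v3+-style callable segment_model that maps an RGB image to a per-pixel class map and a hypothetical class id TRAFFIC_INDICATOR; connected components of the class mask are extracted with OpenCV to obtain individual indicator regions.

    import cv2
    import numpy as np

    TRAFFIC_INDICATOR = 1  # hypothetical class id produced by the segmentation model

    def segment_traffic_indicators(target_image, segment_model):
        """Return a binary indicator mask and per-indicator bounding boxes."""
        class_map = segment_model(target_image)           # HxW array of class ids (assumed interface)
        mask = (class_map == TRAFFIC_INDICATOR).astype(np.uint8)
        # Split the mask into individual traffic indicator regions.
        num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        boxes = [tuple(stats[i, :4]) for i in range(1, num)   # (x, y, w, h); label 0 is background
                 if stats[i, cv2.CC_STAT_AREA] > 20]          # drop tiny speckles
        return mask, boxes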
S102: acquiring camera pose information corresponding to the target image.
In this embodiment, the execution body may acquire the camera pose information corresponding to the target image in various ways. The camera pose information corresponding to the target image may be the pose of the vehicle-mounted camera. As an example, the execution body may acquire the camera pose information by various pose estimation methods, which may include, but are not limited to, feature-based methods and direct matching methods.
S103: projecting the high-precision map data matched with the target image onto the plane of the target image according to the camera pose information to generate a projected image.
In this embodiment, according to the camera pose information acquired in S102, the execution body may project the high-precision map data matched with the target image onto the plane of the target image to generate a projected image. The matched high-precision map data may include three-dimensional data of traffic indicators whose position and orientation meet preset requirements; that is, it is high-precision map data containing data consistent with the traffic indicators presented in the target image. The preset requirements can be set in advance according to the actual application scenario. For example, a preset requirement may be that the distance ahead of the vehicle in the travelling direction is no more than 200 metres. Optionally, the preset requirements may further include correspondence between the position of the traffic indicator and the vehicle's lane, so as to exclude interference from traffic markers belonging to other lanes.
In this embodiment, the execution body may determine the matched high-precision map data from the positioning data and the coordinate and orientation information included in the high-precision map. The positioning data may be acquired in various ways, for example from the EXIF (Exchangeable Image File Format) information of the target image or from the vehicle positioning system associated with the onboard camera. Since high-precision map data often includes three-dimensional data corresponding to a point cloud, the execution body may generate the projected image by projecting the matched high-precision map data according to the coordinate transformation matrix indicated by the camera pose information acquired in S102.
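Purely as an illustrative sketch (not the patented implementation), the projection of S103 can be written as a standard pinhole projection, assuming the camera pose is available as a world-to-camera rotation R and translation t and the camera intrinsic matrix K is known:

    import numpy as np

    def project_map_points(points_world, R, t, K):
        """Project Nx3 world points from the matched map data onto the image plane.

        Returns Mx2 pixel coordinates for the points lying in front of the camera.
        """
        pts_cam = points_world @ R.T + t           # world -> camera coordinates
        pts_cam = pts_cam[pts_cam[:, 2] > 0]       # keep points in front of the camera
        pts_img = pts_cam @ K.T                    # apply pinhole intrinsics
        return pts_img[:, :2] / pts_img[:, 2:3]    # perspective division -> pixel coordinates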
In some optional implementations of the present embodiment, the executing subject may generate the projection image according to the following steps:
firstly, acquiring shooting direction and position information corresponding to a target image.
In these implementations, the execution body may acquire the shooting direction and the position information corresponding to the target image in various ways. As an example, the execution body described above may acquire position information from the positioning data. Then, trajectory information may be generated from the positioning data, and a direction of travel may be generated from the trajectory information. Thereafter, the execution body may determine a direction that coincides with the traveling direction as a photographing direction.
And secondly, selecting high-precision map data matched with the shooting direction and the position information from preset high-precision map data as a candidate data set by using a pre-constructed high-dimensional index tree structure.
In these implementations, the execution subject may search high-precision map data that matches the shooting direction and position information acquired in the first step as a candidate data set using a high-dimensional index tree structure that is constructed in advance. The high-dimensional index tree structure may include a high-precision map data query database indexed according to the trajectory and the camera pose. The high-dimensional index tree structure may include, for example, a K-D tree (K-dimensional tree).
And thirdly, projecting the candidate data set to a plane where the target image is located according to the camera attitude information to generate a projected image.
In these implementations, the executing entity may generate a projection image by projecting the candidate data set selected in the second step onto a plane where the target image is located according to the coordinate transformation matrix indicated by the camera pose information acquired in S102.
Based on this optional implementation, the execution body can quickly screen the matched high-precision map data through the pre-constructed high-dimensional index tree data structure, which effectively reduces the time complexity of retrieval and provides a data basis for subsequently generating traffic indicator change information quickly and accurately.
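As a rough sketch only (the index construction and the query fields are assumptions, not the patented design), the candidate selection can be approximated with scipy's cKDTree over map element positions, followed by a heading filter:

    import numpy as np
    from scipy.spatial import cKDTree

    def select_candidates(element_xy, element_headings, query_xy, shooting_heading,
                          radius=200.0, max_heading_diff=np.deg2rad(20)):
        """Return indices of map elements near the query position whose heading is
        roughly consistent with the shooting direction (thresholds are assumptions)."""
        tree = cKDTree(element_xy)                  # in practice built once and reused per query
        idx = np.array(tree.query_ball_point(query_xy, r=radius), dtype=int)
        diff = element_headings[idx] - shooting_heading
        diff = np.abs(np.arctan2(np.sin(diff), np.cos(diff)))   # wrap angle difference to [0, pi]
        return idx[diff <= max_heading_diff]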
In some optional implementations of this embodiment, after projecting the matched high-precision map data onto the plane of the target image, the execution body may post-process the projected traffic indicator image and use the post-processed image as the projected image. The post-processing may include, but is not limited to, at least one of: region dilation, thinning of points on the contour curve, snapping of contour points to a rectangle, and deletion of duplicate points and lines.
Based on this optional implementation, the projected traffic indicator image can be optimized and corrected, for example to avoid shapes that irregularly or unreasonably deviate from regular geometry, and the waste of storage space caused by redundant data can be reduced.
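A minimal sketch of two of these post-processing operations, assuming each projected indicator is available as a pixel contour; OpenCV's polygon approximation stands in for point thinning and the bounding rectangle stands in for the squaring step:

    import cv2
    import numpy as np

    def postprocess_contour(contour, epsilon_ratio=0.02):
        """Thin the contour points and snap the projected region to a rectangle."""
        contour = np.asarray(contour, dtype=np.float32).reshape(-1, 1, 2)
        epsilon = epsilon_ratio * cv2.arcLength(contour, True)
        thinned = cv2.approxPolyDP(contour, epsilon, True)   # dilute points on the contour curve
        x, y, w, h = cv2.boundingRect(thinned)                # squaring / rectangularisation
        return thinned.reshape(-1, 2), (x, y, w, h)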
S104: generating traffic indicator change information based on the comparison between the traffic indicator image and the projected image.
In this embodiment, the execution body may generate the traffic indicator change information in various ways based on the comparison between the traffic indicator image and the projection image obtained in S103. The traffic indicator change information may be used to indicate whether or not the traffic indicator corresponding to the matched high-accuracy map data is changed. The traffic indicator change information may be used to indicate at least one of: the traffic indicator is increased, the traffic indicator is decreased, and the traffic indicator is not changed.
As an example, in response to determining that the traffic indicator image contains a traffic indicator matching the traffic indicator (e.g., a traffic light) indicated by the projected image obtained in S103, the execution body may generate traffic indicator change information indicating that the traffic indicator is not changed, i.e., the high-precision map data is consistent with the actual situation. As another example, in response to determining that the traffic indicator image contains no traffic indicator matching the traffic indicator (e.g., a traffic light) indicated by the projected image obtained in S103, the execution body may generate traffic indicator change information indicating that the traffic indicator is decreased, i.e., the traffic indicator recorded in the high-precision map data is missing from the real scene.
In some optional implementations of this embodiment, the execution body may further use a grid search (gridding) method to determine whether the traffic indicator indicated by the traffic indicator image is consistent with the traffic indicator indicated by the projected image obtained in S103.
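Purely as an illustrative sketch (the patent does not prescribe this particular matching rule), the comparison can be approximated by an intersection-over-union test between the segmented indicator boxes and the projected indicator boxes, with a hypothetical threshold:

    def box_iou(a, b):
        """Intersection-over-union of two (x, y, w, h) boxes."""
        ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
        bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    def compare_projection(segmented_boxes, projected_boxes, iou_thr=0.3):
        """Label each projected indicator 'not changed' if a segmented indicator
        overlaps it sufficiently, otherwise 'decreased'."""
        return ["not changed" if any(box_iou(p, s) >= iou_thr for s in segmented_boxes)
                else "decreased" for p in projected_boxes]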
In some optional implementations of this embodiment, the executing body may further generate the traffic indicator change information by:
the method comprises the first step of responding to the fact that no traffic indicator image exists in a projection image, inputting the traffic indicator image to a pre-trained traffic indicator fine classification model, and generating the classification information of the traffic indicator.
In these implementations, in response to determining that the traffic indicator image is not present in the projected image, the execution body may input the traffic indicator image into a pre-trained traffic indicator fine classification model to generate the category information to which the traffic indicator belongs. The category information can be used to indicate whether the traffic indicator is displayed in the high-precision map. The pre-trained traffic indicator fine classification model may include various neural networks trained by machine learning. The training sample set of the traffic indicator fine classification model may include positive samples and negative samples. A positive sample may include a traffic indicator image of a motor vehicle lane (e.g., a traffic light, a speed limit sign, etc.) and category information indicating that the traffic indicator is displayed in the high-precision map. A negative sample may include a traffic indicator image of a non-motor-vehicle lane (e.g., a non-motor-vehicle lane light, a crosswalk light, a no-motor-vehicle-entry sign, etc.) and category information indicating that the traffic indicator is not displayed in the high-precision map.
Alternatively, the traffic indicator fine classification model may be a separately trained model, or may be a network layer near the output end in the traffic indicator segmentation model in the optional implementation manner of S101, which is not limited herein.
Second, in response to determining that the generated category information indicates that the traffic indicator is displayed in the high-precision map, generating traffic indicator change information indicating that the traffic indicator is increased.
In these implementations, in response to determining that the generated category information indicates that the traffic indicator is displayed in the high-precision map, the execution subject may generate traffic indicator change information indicating that the traffic indicator is increased, that is, that the corresponding traffic indicator is missing from the high-precision map data.
Third, in response to determining that the generated category information indicates that the traffic indicator is not displayed in the high-precision map, traffic indicator change information indicating that the traffic indicator is not changed is generated.
In these implementations, in response to determining that the generated category information is for indicating that the traffic indicator is not displayed in the high-precision map, the execution subject may generate traffic indicator change information for indicating that the traffic indicator is not changed, i.e., the high-precision map data is consistent with the actual situation.
Based on this optional implementation, the traffic indicator fine classification model can further determine whether the generated traffic indicator change information meets the collection and production requirements of the high-precision map, thereby further improving the accuracy of the traffic indicator change information used to indicate that the high-precision map is missing a traffic indicator.
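A minimal sketch of how such a classifier could feed the change decision, where fine_classifier is a hypothetical pre-trained binary model returning True when the indicator class belongs in the high-precision map:

    def decide_for_unmatched(indicator_crop, fine_classifier):
        """Decide the change label for a segmented indicator that is absent from the projection."""
        shown_in_map = fine_classifier(indicator_crop)   # hypothetical binary model interface
        return "increased" if shown_in_map else "not changed"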
In some optional implementations of this embodiment, the executing body may further execute the following steps:
In a first step, in response to generation of traffic indicator change information indicating an increase in traffic indicators, supplementary data associated with the matched high-precision map data is acquired.
In these implementations, the execution body may acquire supplementary data associated with the matched high-precision map data in response to generating traffic indicator change information indicating an increase in traffic indicators. The supplementary data may include high-precision map data matching the position of the target image, and generally includes high-precision map data that does not completely satisfy the preset requirements. As an example, where the preset requirement is that the distance ahead of the vehicle in the travelling direction is no more than 200 metres, the supplementary data may typically include high-precision map data located within 200 metres ahead of the vehicle with a lateral offset of no more than 20° to either side.
And a second step of changing the generated traffic indicator change information indicating an increase in the traffic indicator to traffic indicator change information indicating no change in the traffic indicator in response to determining that there is three-dimensional data of the traffic indicator in the supplementary data that matches the traffic indicator image indicating an increase in the traffic indicator in the target image.
Based on this optional implementation, the set of candidate matches for the traffic indicator can be further expanded by increasing the amount of matched high-precision map data, thereby further improving the accuracy of the traffic indicator change information used to indicate that the high-precision map is missing a traffic indicator.
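As a sketch of this relaxed matching criterion (the 200 m and 20° values are taken from the example above; the function and its inputs are otherwise assumptions):

    import numpy as np

    def select_supplementary(element_xy, vehicle_xy, travel_heading,
                             max_dist=200.0, max_offset_deg=20.0):
        """Select map elements ahead of the vehicle within max_dist metres and within
        +/- max_offset_deg of the travel direction."""
        rel = np.asarray(element_xy) - np.asarray(vehicle_xy)
        dist = np.linalg.norm(rel, axis=1)
        bearing = np.arctan2(rel[:, 1], rel[:, 0])
        offset = np.abs(np.arctan2(np.sin(bearing - travel_heading),
                                   np.cos(bearing - travel_heading)))
        keep = (dist <= max_dist) & (offset <= np.deg2rad(max_offset_deg))
        return np.nonzero(keep)[0]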
The method provided by the above embodiment of the application generates traffic indicator change information indicating whether a traffic indicator has changed by projecting the high-precision map data matched with the target image onto the plane of the target image and comparing the projection with the traffic indicator image segmented from the target image. In this way, whether the traffic indicators (e.g., traffic lights) represented in the high-precision map data have actually changed can be determined quickly, promptly and at low cost, and the approach generalizes well. It can further provide a solid data foundation for minute-level automatic updating of high-precision maps.
With continued reference to fig. 2, fig. 2 is a schematic diagram 200 of a second embodiment according to the present application. The method for generating information comprises the following steps:
S201: segmenting a traffic indicator image from the target image.
S202: acquiring camera pose information corresponding to the target image.
S203: projecting the high-precision map data matched with the target image onto the plane of the target image according to the camera pose information to generate a projected image.
S201, S202, and S203 are respectively consistent with S101, S102, and S103 and their optional implementations in the foregoing embodiments, and the above description on S101, S102, and S103 and their optional implementations also applies to S201, S202, and S203, which is not described herein again.
S204: generating traffic indicator change sub-information according to the comparison between the traffic indicator image and the projected image.
In this embodiment, the execution body of the method for generating information may generate the traffic indicator change sub-information in a manner consistent with the method described in S104 and its optional implementations in the foregoing embodiment. The traffic indicator change sub-information may be used to indicate whether the traffic indicator corresponding to the matched high-precision map data is changed.
S205: acquiring a target number of extended images associated with the target image.
In this embodiment, the execution body may acquire the target number of extended images associated with the target image in various ways. The high-precision map data matched with an extended image is generally consistent with the high-precision map data matched with the target image. As an example, the execution body may acquire the extended images associated with the target image from the onboard camera that captured the target image. For example, a vehicle equipped with the onboard camera may capture images continuously while travelling, and the extended images associated with the target image may be several images adjacent to the target image in the image sequence.
S206: segmenting traffic indicator images from the target number of extended images.
In this embodiment, the executing body may segment the traffic indicator image from the target number of extended images in a manner consistent with the method described in S101 in the foregoing embodiment.
S207: generating the target number of pieces of traffic indicator change sub-information according to the comparison between the traffic indicator images segmented from the target number of extended images and the projected images.
In this embodiment, the executing agent may generate the target number of traffic indicator change sub-information in a manner consistent with the methods described in S102 to S104 and their optional implementations in the foregoing embodiments.
S208: counting the generated pieces of traffic indicator change sub-information to generate the traffic indicator change information.
In this embodiment, the execution body may count the generated traffic indicator change sub-information in various ways. As an example, the execution body may count the number of pieces of traffic indicator change sub-information indicating that the traffic indicator is changed and the number indicating that it is not changed, and generate the traffic indicator change information corresponding to whichever indication has the larger count. As another example, the execution body may determine whether the ratio of the number of pieces of traffic indicator change sub-information indicating a change to the target number exceeds a preset ratio threshold; in response to determining that the ratio is greater than the threshold, it may generate traffic indicator change information indicating that the traffic indicator is changed, and in response to determining that the ratio is not greater than the threshold, it may generate traffic indicator change information indicating that the traffic indicator is not changed.
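A minimal sketch of the ratio-threshold variant of this statistic, assuming each piece of sub-information has been reduced to the label 'changed' or 'not changed' (the threshold value is a hypothetical example):

    from collections import Counter

    def aggregate_sub_information(sub_infos, ratio_threshold=0.5):
        """Aggregate per-image change sub-information into the final change information."""
        counts = Counter(sub_infos)
        changed_ratio = counts["changed"] / max(len(sub_infos), 1)
        return "changed" if changed_ratio > ratio_threshold else "not changed"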
In some optional implementations of this embodiment, in response to determining that the numbers of pieces of traffic indicator change sub-information do not satisfy the preset condition for deciding whether the traffic indicator is changed, the execution body may further generate traffic indicator change information indicating that it is undetermined whether the traffic indicator is changed. In these implementations, the execution body may send information indicating a manual takeover, or re-execute the method for generating information for the area corresponding to the target image.
As can be seen from fig. 2, the flow 200 of the method for generating information in this embodiment highlights the step of determining the finally generated traffic indicator change information from the statistical result of the traffic indicator change sub-information determined over the target number of extended images. The scheme described in this embodiment can therefore determine the change of a traffic indicator from multiple images associated with the target image, which improves the reliability of the traffic indicator change information and helps guarantee the accuracy of the high-precision map.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of a method for generating information according to an embodiment of the present application. In the application scenario of fig. 3, an autonomous vehicle 301 may capture a target image 302 with an onboard camera while travelling. The autonomous vehicle 301 may then upload the target image 302 to a backend server 303. The backend server 303 may segment a traffic light image 304 from the target image 302 using an image segmentation method. The backend server 303 may also obtain camera pose information 305 of the onboard camera from the autonomous vehicle 301. Then, the backend server 303 may project the high-precision map data corresponding to the shooting position of the target image 302 onto the plane of the target image 302 according to the coordinate transformation matrix indicated by the acquired camera pose information 305, generating a projected image 306. Finally, the backend server 303 may compare the traffic light image 304 and the projected image 306 to generate traffic indicator change information 307, which in this example indicates that a traffic indicator is increased.
In the prior art, collected point clouds and images are generally fused in the background, and the global information of a high-precision map is constructed from pieces of local information, which leads to a long collection cycle, a long mapping cycle and a high production cost. In the method provided by the embodiments of the application, the high-precision map data matched with the target image is projected onto the plane of the target image and compared with it, so as to generate traffic indicator change information indicating whether a traffic indicator has changed. This makes it possible to determine quickly, promptly and at low cost whether the traffic indicators represented in the high-precision map data have actually changed, and the approach generalizes well. It can further provide a solid data foundation for minute-level automatic updating of high-precision maps.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable in various electronic devices.
As shown in fig. 4, the apparatus 400 for generating information provided by this embodiment includes a segmentation unit 401, a first acquisition unit 402, a projection unit 403 and a generation unit 404. The segmentation unit 401 is configured to segment a traffic indicator image from a target image; the first acquisition unit 402 is configured to acquire camera pose information corresponding to the target image; the projection unit 403 is configured to project high-precision map data matched with the target image onto the plane of the target image according to the camera pose information to generate a projected image, where the matched high-precision map data includes three-dimensional data of traffic indicators whose position and orientation meet preset requirements; and the generation unit 404 is configured to generate traffic indicator change information based on the comparison between the traffic indicator image and the projected image, where the traffic indicator change information is used to indicate whether the traffic indicator corresponding to the matched high-precision map data is changed, and is used to indicate at least one of the following: the traffic indicator is increased, the traffic indicator is decreased, and the traffic indicator is not changed.
In this embodiment, in the apparatus 400 for generating information, the specific processing of the segmentation unit 401, the first acquisition unit 402, the projection unit 403 and the generation unit 404 and the technical effects thereof may refer to the related descriptions of S101, S102, S103 and S104 in the embodiment corresponding to fig. 1, respectively, and are not described here again.
In some optional implementations of the present embodiment, the projection unit 403 may include a first obtaining module (not shown in the figure), a selecting module (not shown in the figure), and a projection module (not shown in the figure). The first obtaining module may be configured to obtain shooting direction and position information corresponding to the target image; the selecting module may be configured to select, from preset high-precision map data, high-precision map data matched with the shooting direction and the position information as a candidate data set by using a pre-constructed high-dimensional index tree data structure; the projection module may be configured to project the candidate data set to a plane where the target image is located according to the camera pose information to generate a projection image.
In some optional implementations of this embodiment, the generating unit 404 may include a first comparing module (not shown in the figure), a second obtaining module (not shown in the figure), a segmentation module (not shown in the figure), a second comparing module (not shown in the figure), and a first generating module (not shown in the figure). The first comparing module may be configured to generate traffic indicator change sub-information according to the comparison between the traffic indicator image and the projected image. The second obtaining module may be configured to acquire a target number of extended images associated with the target image, where the high-precision map data matched with the extended images is consistent with the high-precision map data matched with the target image. The segmentation module may be configured to segment traffic indicator images from the target number of extended images. The second comparing module may be configured to generate the target number of pieces of traffic indicator change sub-information according to the comparison between the traffic indicator images segmented from the target number of extended images and the projected images. The first generating module may be configured to generate the traffic indicator change information by counting the generated pieces of traffic indicator change sub-information.
In some optional implementations of the present embodiment, the generating unit 404 may include a classifying module (not shown in the figure), a second generating module (not shown in the figure), and a third generating module (not shown in the figure). The classification module may be configured to, in response to determining that the traffic indicator image does not exist in the projection image, input the traffic indicator image to a pre-trained traffic indicator fine classification model, and generate class information to which the traffic indicator belongs. The category information can be used for indicating whether the traffic indicator is displayed in the high-precision map. The second generation module may be configured to generate traffic indicator alteration information indicating an increase in traffic indicators in response to determining that the generated category information indicates that the traffic indicators are displayed in the high-precision map. The third generation module may be configured to generate traffic indicator change information indicating that the traffic indicator is not changed in response to determining that the generated category information indicates that the traffic indicator is not displayed in the high-precision map.
In some optional implementations of this embodiment, the apparatus 400 for generating information may further include: a second acquiring unit (not shown), and a changing unit (not shown). Wherein the above-described second acquisition unit may be configured to acquire the supplementary data associated with the matched high-precision map data in response to generation of traffic indicator change information indicating an increase in traffic indicators. The supplementary data may include, among other things, high-precision map data that matches the location of the target image. The above-mentioned changing unit may be configured to change the generated traffic indicator change information indicating an increase in the traffic indicator to the traffic indicator change information indicating no change in the traffic indicator in response to determining that there is three-dimensional data of the traffic indicator in the supplementary data that matches the traffic indicator image indicating an increase in the traffic indicator in the target image.
The apparatus provided by the above embodiment of the application segments the traffic indicator image from the target image through the segmentation unit 401. The first acquisition unit 402 then acquires the camera pose information corresponding to the target image. The projection unit 403 then projects the high-precision map data matched with the target image onto the plane of the target image according to the camera pose information to generate a projected image, where the matched high-precision map data includes three-dimensional data of traffic indicators whose position and orientation meet preset requirements. The generation unit 404 generates traffic indicator change information based on the comparison between the traffic indicator image and the projected image; the traffic indicator change information is used to indicate whether the traffic indicator corresponding to the matched high-precision map data is changed, and indicates at least one of the following: the traffic indicator is increased, the traffic indicator is decreased, and the traffic indicator is not changed. In this way, whether the traffic indicators (such as traffic lights) represented in the high-precision map data have actually changed can be determined quickly, promptly and at low cost, the approach generalizes well, and a solid data foundation can be provided for minute-level automatic updating of high-precision maps.
Referring now to fig. 5, the present application further provides an electronic device and a readable storage medium according to embodiments of the present application.
As shown in fig. 5, there is a block diagram of an electronic device for generating information according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as the automatic control system of an autonomous vehicle, personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 5, the electronic device includes: one or more processors 501, a memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 501 is taken as an example.
Memory 502 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method for generating information provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method for generating information provided herein.
The memory 502, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the method for generating information in the embodiment of the present application (for example, the segmentation unit 401, the first acquisition unit 402, the projection unit 403, and the generation unit 404 shown in fig. 4). The processor 501 executes various functional applications of the server and data processing, i.e., implements the method for generating information in the above-described method embodiments, by executing non-transitory software programs, instructions, and modules stored in the memory 502.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for generating information, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 502 optionally includes memory located remotely from processor 501, which may be connected to an electronic device for generating information over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method for generating information may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 5 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic device used to generate the information, and may be, for example, a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, joystick or other input device. The output devices 504 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the traffic indicator change information used for indicating whether the traffic indicator is changed or not can be generated. Therefore, whether the traffic indicators (such as traffic lights) presented by the high-precision map data are changed in reality or not can be judged quickly and timely in a low-cost mode, and the method has good generalization. And further, a solid data base can be provided for the automatic updating of the created minute-level high-precision map.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A method for generating information, comprising:
segmenting a traffic indicator image from the target image;
acquiring camera attitude information corresponding to the target image;
according to the camera attitude information, projecting high-precision map data matched with the target image to a plane where the target image is located to generate a projected image, wherein the matched high-precision map data comprises three-dimensional data of a traffic indicator with the position and the orientation meeting preset requirements;
generating traffic indicator change information based on the comparison between the traffic indicator image and the projected image, wherein the traffic indicator change information is used for indicating whether the traffic indicator corresponding to the matched high-precision map data is changed or not, and the traffic indicator change information is used for indicating at least one of the following items: the traffic indicator is increased, the traffic indicator is decreased, and the traffic indicator is not changed.
2. The method of claim 1, wherein projecting, according to the camera pose information, the high-precision map data matched with the target image onto the plane of the target image to generate the projected image comprises:
acquiring shooting direction and position information corresponding to the target image;
selecting, from preset high-precision map data and by using a pre-constructed high-dimensional index tree data structure, high-precision map data matching the shooting direction and the position information as a candidate data set;
and projecting, according to the camera pose information, the candidate data set onto the plane of the target image to generate the projected image.
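A minimal sketch of the candidate selection recited in claim 2 follows, using a k-d tree (scipy.spatial.cKDTree) as one possible high-dimensional index tree; the dictionary field "position", the 50 m search radius, and the 60-degree viewing-angle threshold are illustrative assumptions, not values from the patent.

import numpy as np
from scipy.spatial import cKDTree

def select_candidates(map_elements, shoot_pos, shoot_dir, radius_m=50.0, max_angle_deg=60.0):
    """Return map elements near the shooting position and roughly within the viewing direction."""
    positions = np.array([e["position"] for e in map_elements])   # N x 3 world coordinates
    tree = cKDTree(positions)                                      # the pre-constructed index tree
    shoot_pos = np.asarray(shoot_pos, dtype=float)
    shoot_dir = np.asarray(shoot_dir, dtype=float)
    shoot_dir = shoot_dir / np.linalg.norm(shoot_dir)
    candidates = []
    for i in tree.query_ball_point(shoot_pos, r=radius_m):         # spatial pre-filter by position
        to_elem = positions[i] - shoot_pos
        to_elem = to_elem / (np.linalg.norm(to_elem) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(shoot_dir @ to_elem, -1.0, 1.0)))
        if angle <= max_angle_deg:                                  # roughly in front of the camera
            candidates.append(map_elements[i])
    return candidates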
3. The method of claim 1, wherein generating the traffic indicator change information based on the comparison between the traffic indicator image and the projected image comprises:
generating traffic indicator change sub-information according to the comparison between the traffic indicator image and the projected image;
acquiring a target number of extended images related to the target image, wherein the high-precision map data matched with the extended images is consistent with the high-precision map data matched with the target image;
segmenting traffic indicator images from the target number of extended images;
generating a target number of pieces of traffic indicator change sub-information according to comparisons between the traffic indicator images segmented from the target number of extended images and the corresponding projected images;
and generating the traffic indicator change information by performing statistics on the generated pieces of traffic indicator change sub-information.
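A minimal sketch of the statistics step recited in claim 3 follows, aggregating the per-image change sub-information by majority vote; the label strings are illustrative only.

from collections import Counter

def aggregate_change_info(sub_infos):
    """Majority vote over per-image results such as 'increased', 'decreased', 'unchanged'."""
    label, _ = Counter(sub_infos).most_common(1)[0]
    return label

# Example: a single noisy frame is outvoted by the other extended images.
print(aggregate_change_info(["increased", "increased", "unchanged", "increased"]))  # -> increased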
4. The method of claim 1, wherein generating the traffic indicator change information based on the comparison between the traffic indicator image and the projected image comprises:
in response to determining that the traffic indicator image is not present in the projected image, inputting the traffic indicator image into a pre-trained traffic indicator fine classification model to generate category information of the traffic indicator, wherein the category information indicates whether the traffic indicator is displayed in a high-precision map;
in response to determining that the generated category information indicates that the traffic indicator is displayed in the high-precision map, generating traffic indicator change information indicating an increase in traffic indicators;
and in response to determining that the generated category information indicates that the traffic indicator is not displayed in the high-precision map, generating traffic indicator change information indicating that the traffic indicator is not changed.
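A minimal sketch of the branch recited in claim 4 follows; the fine_classifier callable stands in for the pre-trained fine classification model, and its Boolean interface is an assumption made for illustration.

def decide_when_missing_from_projection(indicator_crop, fine_classifier):
    """Called only when the segmented indicator has no counterpart in the projected image."""
    shown_in_map = fine_classifier(indicator_crop)   # True if this category belongs in the map
    if shown_in_map:
        return "increased"    # a map-worthy indicator appeared that the map does not contain
    return "unchanged"        # a category not represented in the map, so the map is not stale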
5. The method according to any one of claims 1-4, further comprising:
in response to generating traffic indicator change information indicating an increase in traffic indicators, acquiring supplementary data associated with the matched high-precision map data, wherein the supplementary data includes high-precision map data matched with the position of the target image;
and in response to determining that the supplementary data contains three-dimensional data of a traffic indicator matching the traffic indicator image in the target image for which the increase was indicated, changing the generated traffic indicator change information indicating an increase in traffic indicators into traffic indicator change information indicating that the traffic indicator is not changed.
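A minimal sketch of the correction recited in claim 5 follows; matched_in is a placeholder for whatever geometric matching against the supplementary data an implementation uses, and the label strings are illustrative.

def correct_with_supplementary(change_info, indicator_crop, supplementary_data, matched_in):
    """Downgrade an 'increased' result when the supplementary map data already contains the indicator."""
    if change_info == "increased" and matched_in(indicator_crop, supplementary_data):
        return "unchanged"    # the indicator already exists in the supplementary map data
    return change_info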
6. An apparatus for generating information, comprising:
a segmentation unit configured to segment a traffic indicator image from a target image;
a first acquisition unit configured to acquire camera pose information corresponding to the target image;
a projection unit configured to project, according to the camera pose information, high-precision map data matched with the target image onto the plane of the target image to generate a projected image, wherein the matched high-precision map data comprises three-dimensional data of a traffic indicator whose position and orientation meet preset requirements;
and a generating unit configured to generate traffic indicator change information based on a comparison between the traffic indicator image and the projected image, wherein the traffic indicator change information indicates whether the traffic indicator corresponding to the matched high-precision map data has changed, and indicates at least one of the following: the traffic indicator is increased, the traffic indicator is decreased, or the traffic indicator is not changed.
7. The apparatus of claim 6, wherein the projection unit comprises:
a first acquisition module configured to acquire shooting direction and position information corresponding to the target image;
a selection module configured to select, from preset high-precision map data and by using a pre-constructed high-dimensional index tree data structure, high-precision map data matching the shooting direction and the position information as a candidate data set;
and a projection module configured to project, according to the camera pose information, the candidate data set onto the plane of the target image to generate the projected image.
8. The apparatus of claim 6, wherein the generating unit comprises:
a first comparison module configured to generate traffic indicator change sub-information according to the comparison between the traffic indicator image and the projected image;
a second acquisition module configured to acquire a target number of extended images related to the target image, wherein the high-precision map data matched with the extended images is consistent with the high-precision map data matched with the target image;
a segmentation module configured to segment traffic indicator images from the target number of extended images;
a second comparison module configured to generate a target number of pieces of traffic indicator change sub-information according to comparisons between the traffic indicator images segmented from the target number of extended images and the corresponding projected images;
and a first generation module configured to generate the traffic indicator change information by performing statistics on the generated pieces of traffic indicator change sub-information.
9. The apparatus of claim 6, wherein the generating unit comprises:
a classification module configured to, in response to determining that the traffic indicator image is not present in the projected image, input the traffic indicator image into a pre-trained traffic indicator fine classification model to generate category information of the traffic indicator, wherein the category information indicates whether the traffic indicator is displayed in a high-precision map;
a second generation module configured to generate traffic indicator change information indicating an increase in traffic indicators in response to determining that the generated category information indicates that the traffic indicator is displayed in the high-precision map;
and a third generation module configured to generate traffic indicator change information indicating that the traffic indicator is not changed in response to determining that the generated category information indicates that the traffic indicator is not displayed in the high-precision map.
10. The apparatus according to any one of claims 6-9, further comprising:
a second acquisition unit configured to acquire, in response to generation of traffic indicator change information indicating an increase in traffic indicators, supplementary data associated with the matched high-precision map data, wherein the supplementary data includes high-precision map data matched with the position of the target image;
and a changing unit configured to change the generated traffic indicator change information indicating an increase in traffic indicators into traffic indicator change information indicating that the traffic indicator is not changed, in response to determining that the supplementary data contains three-dimensional data of a traffic indicator matching the traffic indicator image in the target image for which the increase was indicated.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.
CN202010411190.1A 2020-05-15 2020-05-15 Method, apparatus, device and storage medium for generating information Active CN111597986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010411190.1A CN111597986B (en) 2020-05-15 2020-05-15 Method, apparatus, device and storage medium for generating information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010411190.1A CN111597986B (en) 2020-05-15 2020-05-15 Method, apparatus, device and storage medium for generating information

Publications (2)

Publication Number Publication Date
CN111597986A true CN111597986A (en) 2020-08-28
CN111597986B CN111597986B (en) 2023-09-29

Family

ID=72183717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010411190.1A Active CN111597986B (en) 2020-05-15 2020-05-15 Method, apparatus, device and storage medium for generating information

Country Status (1)

Country Link
CN (1) CN111597986B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861832A (en) * 2021-04-25 2021-05-28 湖北亿咖通科技有限公司 Traffic identification detection method and device, electronic equipment and storage medium
CN113514053A (en) * 2021-07-13 2021-10-19 阿波罗智能技术(北京)有限公司 Method and device for generating sample image pair and method for updating high-precision map
CN113706704A (en) * 2021-09-03 2021-11-26 北京百度网讯科技有限公司 Method and equipment for planning route based on high-precision map and automatic driving vehicle

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110182475A1 (en) * 2010-01-22 2011-07-28 Google Inc. Traffic signal mapping and detection
CN102447886A (en) * 2010-09-14 2012-05-09 微软公司 Visualizing video within existing still images
US20170003134A1 (en) * 2015-06-30 2017-01-05 Lg Electronics Inc. Advanced Driver Assistance Apparatus, Display Apparatus For Vehicle And Vehicle
US20180143648A1 (en) * 2016-11-24 2018-05-24 Lg Electronics Inc. Vehicle control device mounted on vehicle and method for controlling the vehicle
CN109271924A (en) * 2018-09-14 2019-01-25 盯盯拍(深圳)云技术有限公司 Image processing method and image processing apparatus
CN109579856A (en) * 2018-10-31 2019-04-05 百度在线网络技术(北京)有限公司 Accurately drawing generating method, device, equipment and computer readable storage medium
CN109597862A (en) * 2018-10-31 2019-04-09 百度在线网络技术(北京)有限公司 Ground drawing generating method, device and computer readable storage medium based on puzzle type
CN110147382A (en) * 2019-05-28 2019-08-20 北京百度网讯科技有限公司 Lane line update method, device, equipment, system and readable storage medium storing program for executing
US20190329783A1 (en) * 2017-01-12 2019-10-31 Mobileye Vision Technologies Ltd. Navigation at alternating merge zones
US20200130686A1 (en) * 2018-10-26 2020-04-30 Hyundai Motor Company Method for controlling deceleration of environmentally friendly vehicle

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110182475A1 (en) * 2010-01-22 2011-07-28 Google Inc. Traffic signal mapping and detection
CN102792316A * 2010-01-22 2012-11-21 Google Inc. Traffic signal mapping and detection
CN102447886A (en) * 2010-09-14 2012-05-09 微软公司 Visualizing video within existing still images
US20170003134A1 (en) * 2015-06-30 2017-01-05 Lg Electronics Inc. Advanced Driver Assistance Apparatus, Display Apparatus For Vehicle And Vehicle
US20180143648A1 (en) * 2016-11-24 2018-05-24 Lg Electronics Inc. Vehicle control device mounted on vehicle and method for controlling the vehicle
US20190329783A1 (en) * 2017-01-12 2019-10-31 Mobileye Vision Technologies Ltd. Navigation at alternating merge zones
CN109271924A (en) * 2018-09-14 2019-01-25 盯盯拍(深圳)云技术有限公司 Image processing method and image processing apparatus
US20200130686A1 (en) * 2018-10-26 2020-04-30 Hyundai Motor Company Method for controlling deceleration of environmentally friendly vehicle
CN109579856A (en) * 2018-10-31 2019-04-05 百度在线网络技术(北京)有限公司 Accurately drawing generating method, device, equipment and computer readable storage medium
CN109597862A (en) * 2018-10-31 2019-04-09 百度在线网络技术(北京)有限公司 Ground drawing generating method, device and computer readable storage medium based on puzzle type
CN110147382A (en) * 2019-05-28 2019-08-20 北京百度网讯科技有限公司 Lane line update method, device, equipment, system and readable storage medium storing program for executing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Rui: "Research on semantic segmentation of multi-state targets in complex three-dimensional scenes based on laser point clouds", pages 135 - 19 *
Lei Zhen: "Lane line tracking control technology fusing fully convolutional neural networks and Kalman filtering", China Master's Theses Full-text Database (Basic Sciences), pages 035 - 288 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861832A (en) * 2021-04-25 2021-05-28 湖北亿咖通科技有限公司 Traffic identification detection method and device, electronic equipment and storage medium
CN113514053A (en) * 2021-07-13 2021-10-19 阿波罗智能技术(北京)有限公司 Method and device for generating sample image pair and method for updating high-precision map
CN113514053B (en) * 2021-07-13 2024-03-26 阿波罗智能技术(北京)有限公司 Method and device for generating sample image pair and method for updating high-precision map
CN113706704A (en) * 2021-09-03 2021-11-26 北京百度网讯科技有限公司 Method and equipment for planning route based on high-precision map and automatic driving vehicle

Also Published As

Publication number Publication date
CN111597986B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
CN111797187B (en) Map data updating method and device, electronic equipment and storage medium
CN110979346B (en) Method, device and equipment for determining lane where vehicle is located
CN111695488B (en) Method, device, equipment and storage medium for identifying interest surface
CN111998860B (en) Automatic driving positioning data verification method and device, electronic equipment and storage medium
CN111859778B (en) Parking model generation method and device, electronic device and storage medium
KR20220113829A (en) Vehicle tracking methods, devices and electronic devices
CN111597986B (en) Method, apparatus, device and storage medium for generating information
JP7204823B2 (en) VEHICLE CONTROL METHOD, VEHICLE CONTROL DEVICE, AND VEHICLE
CN111292531B (en) Tracking method, device and equipment of traffic signal lamp and storage medium
CN111597987B (en) Method, apparatus, device and storage medium for generating information
CN111950537B (en) Zebra crossing information acquisition method, map updating method, device and system
CN113091757B (en) Map generation method and device
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
WO2023231991A1 (en) Traffic signal lamp sensing method and apparatus, and device and storage medium
CN111767360A (en) Method and device for marking virtual lane at intersection
CN111523515A (en) Method and device for evaluating environment cognitive ability of automatic driving vehicle and storage medium
CN112581533B (en) Positioning method, positioning device, electronic equipment and storage medium
CN112101527B (en) Method and device for identifying lane change, electronic equipment and storage medium
CN111324616B (en) Method, device and equipment for detecting lane change information
CN113673281A (en) Speed limit information determining method, device, equipment and storage medium
EP3985637A2 (en) Method and apparatus for outputting vehicle flow direction, roadside device, and cloud control platform
CN113011298A (en) Truncated object sample generation method, target detection method, road side equipment and cloud control platform
CN110751853B (en) Parking space data validity identification method and device
CN113297878A (en) Road intersection identification method and device, computer equipment and storage medium
CN111260722A (en) Vehicle positioning method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant