CN114037966A - High-precision map feature extraction method, device, medium and electronic equipment - Google Patents
- Publication number: CN114037966A (application CN202111272599.0A)
- Authority: CN (China)
- Prior art keywords: road, features, determining, original geometric, frames
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Abstract
The application discloses a high-precision map feature extraction method, device, medium and electronic equipment, relating to the field of artificial intelligence, in particular to computer vision, and specifically to autonomous driving, high-precision maps and intelligent transportation. The implementation scheme is as follows: acquire at least two frames of road images captured from different lanes in the same road area; extract the original geometric features of the road surface markings in each of the at least two frames; and fuse the at least two sets of original geometric features to obtain the target geometric features of the road surface marking. By fusing features extracted from multiple viewpoints, the technical scheme provided by the application extracts more accurate geometric features of road surface markings.
Description
Technical Field
The application relates to the field of artificial intelligence, in particular to the field of computer vision, and specifically relates to the fields of automatic driving, high-precision maps and intelligent transportation.
Background
A high-precision map, also called a high-definition (HD) map, is used by autonomous vehicles. It carries accurate vehicle position information and rich road element data, helps a vehicle anticipate complex road surface information such as gradient, curvature and heading, and helps it better avoid potential risks. Road surface markings are present in large numbers on urban roads, and they are often extracted from camera-acquired road images in processes such as autonomous driving, high-precision map production and intelligent traffic management. However, because of the acquisition viewing angle, the road surface markings in a road image often suffer a degree of loss and distortion, which seriously degrades the accuracy of marking feature extraction.
Disclosure of Invention
The application discloses a high-precision map feature extraction method, device, medium and electronic equipment for extracting the geometric features of road surface markings, so that more accurate geometric features can be extracted.
According to an aspect of the present application, there is provided a high-precision map feature extraction method, including:
acquiring at least two frames of road images acquired in different lanes in the same road area;
respectively extracting the original geometric features of the pavement markers in the at least two frames of road images;
and carrying out fusion processing on at least two original geometric characteristics to obtain the target geometric characteristics of the pavement marker.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a high-precision map feature extraction method according to any one of the embodiments of the present application.
According to an aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the high-precision map feature extraction method according to any one of the embodiments of the present application.
By executing the technical scheme provided by the application, more accurate geometric features of the pavement markers can be extracted.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic diagram of a high-precision map feature extraction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application;
FIG. 3A is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application;
fig. 3B is a schematic flowchart of another high-precision map feature extraction method provided in the embodiment of the present application;
FIG. 4 is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a high-precision map feature extraction device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a high-precision map feature extraction method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic diagram of a high-precision map feature extraction method according to an embodiment of the present application. This embodiment applies to the situation of extracting the geometric features of road surface markings. The method disclosed in this embodiment may be executed by a high-precision map feature extraction device, which may be implemented in software and/or hardware and configured in an electronic device with computing and storage functions. Referring to fig. 1, the high-precision map feature extraction method provided in this embodiment includes:
s110, acquiring at least two frames of road images acquired in different lanes for the same road area.
The road area refers to an area on a road where a ground mark is drawn, and may be, for example, a road intersection where a turning guide, a stop line, or a pedestrian crossing is drawn.
The road image may be acquired by a professional image-acquisition vehicle, or by an ordinary vehicle using a dashboard camera or a handheld device with a camera, such as a mobile phone or a tablet computer. The acquisition method and acquisition device are not limited here and are determined by the actual situation. The road image may be a front view of the road area.
Optionally, the at least two frames of road images may be acquired by the acquisition device in different lanes, and at least one frame of road image is acquired for each lane. For example, the vehicle with the collection device may be controlled to respectively run on each lane, and during the running process of the vehicle on each lane, the collection device may be controlled to collect the road image according to a preset frequency (for example, 10 frames of images per second), that is, collect multiple frames of images for each lane.
In this embodiment, different road images correspond to different image-acquisition positions. Because the acquisition position affects the camera viewing angle, shooting the same road area from different acquisition positions yields views of that area under different camera viewing angles; that is, the road area is captured from multiple angles.
And S120, respectively extracting the original geometric features of the pavement markers in the at least two frames of road images.
The road surface mark refers to a graph drawn on a road by a relevant department and used for guiding pedestrians or vehicles to travel. The pavement marker may be, for example, a diamond-shaped marker, a lane line, a crosswalk, a stop line, or the like. The original geometric features of the pavement marker are information describing polygons that constitute the pavement marker. For example, in the case that the road surface is identified as a crosswalk, the original geometric features of the crosswalk may be feature data such as corner points or edges of polygons forming the crosswalk.
In an alternative embodiment, the road surface marking is a lane-crossing road surface marking, i.e., a marking that simultaneously spans at least one lane, such as a crosswalk or a stop line. Because of the camera viewing angle, an image of a lane-crossing marking is often incomplete, and the marking tends to deform geometrically in regions far from the camera; the method is therefore particularly accurate and effective for extracting the geometric features of lane-crossing markings.
The original geometric features of the road surface markings are extracted from the road images collected in each lane. Specifically, semantic segmentation is performed on each road image: a semantic segmentation network performs full-element segmentation, separating elements such as signs, vegetation, roads, pedestrians and vehicles. The ground elements are then extracted and further segmented into elements such as lane lines, ground arrows, crosswalks, stop lines and ground speed-limit markings. Next, the lane-crossing road surface markings are selected from the segmented ground elements, and connected-domain analysis, a clustering algorithm and outer-contour analysis are applied to determine their original geometric features. Optionally, in this embodiment, one set of original geometric features is determined from at least one frame of road image collected in each lane; alternatively, one set may be determined for each frame of road image. The method is not limited here.
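The connected-domain analysis step can be sketched as follows. This is a minimal, pure-Python stand-in: a real pipeline would run the segmentation network first and use a library routine (e.g. OpenCV's contour functions) on the resulting mask; the 4-connected labeling below only illustrates how segmented marking pixels are grouped into candidate markings.

```python
from collections import deque

def connected_components(mask):
    """4-connected component labeling on a binary mask (list of 0/1 rows).

    Each returned component is the list of (row, col) pixels of one
    candidate road surface marking blob.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                components.append(comp)
    return components

# Toy mask with two separate marking blobs (e.g. two crosswalk stripes)
mask = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
print(len(connected_components(mask)))  # → 2
```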
Preferably, since the road surface marking lies on the ground, when extracting its original geometric features from the road images, the at least two acquired frames may first be converted into top views (bird's-eye views), and the original geometric features then extracted from the top views, improving extraction accuracy.
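The top-view conversion is a planar homography from the image plane to the ground plane. A minimal sketch, assuming the 3x3 matrix H has already been obtained from camera calibration (the scaling matrix used below is purely illustrative):

```python
import numpy as np

def to_top_view(points, H):
    """Map image-plane points (N x 2) to the ground plane with a 3x3
    homography H, as one would before extracting marking geometry."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # de-homogenize

# Illustrative homography: pure scaling, so (u, v) -> (2u, 2v)
H = np.diag([2.0, 2.0, 1.0])
print(to_top_view(np.array([[10.0, 20.0]]), H))  # → [[20. 40.]]
```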
Another possible implementation may be to separately input at least two frames of road images into a pre-trained feature recognition model, and extract the original geometric features of the road surface markers in each frame of road image through the feature recognition model.
S130, performing fusion processing on at least two original geometric characteristics to obtain the target geometric characteristics of the pavement marker.
The original geometric features are the geometric features extracted from a single road image, without further processing. Because of the camera viewing angle, a lane-crossing road surface marking is prone to geometric deformation in regions far from the camera, so the original geometric features may not truly reflect the geometry of the marking. The target geometric features are the result of fusing at least two sets of original geometric features. Since each set of original features is extracted from a single road image, it is incomplete on its own; the target geometric features fuse original features captured from at least two different lanes and are therefore more accurate and more complete than any single set.
Fusing at least two sets of original geometric features to obtain the target geometric features of the pavement marker can proceed in several ways: the original features may first be superimposed and the superimposed features corrected against the standard geometric features of the marking; or the original features may be input into a pre-constructed feature-fusion model, which performs the fusion and outputs the target features; or feature fusion may be performed based on the region-intersection relationships between the original features to obtain the target features.
According to the technical scheme of this embodiment, at least two frames of road images collected from different lanes in the same road area are obtained; the original geometric features of the pavement markers are extracted from the road images collected in each lane; and at least two sets of original geometric features are fused to obtain the target geometric features of the pavement marker. Fusing the original geometric features effectively integrates the geometry captured from different camera viewing angles, yielding target geometric features that are more accurate and cover more of the pavement marker. Executing this technical scheme effectively alleviates the problem that the geometric features extracted from a single road image are inaccurate because the pavement marker in that image is incomplete and distorted.
FIG. 2 is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application; the present embodiment is an alternative proposed on the basis of the above-described embodiments. Specifically, the refinement of the operation "performing fusion processing on at least two original geometric features to obtain the target geometric features of the road surface marker" is performed.
Referring to fig. 2, the high-precision map feature extraction method provided by the embodiment includes:
s210, acquiring at least two frames of road images acquired in different lanes for the same road area.
And S220, respectively extracting the original geometric features of the road surface marks in the at least two frames of road images.
Specifically, at least one set of original geometric features of the road surface mark is extracted from each frame of road image.
And S230, determining an intersection region and a non-intersection region of at least two original geometric features.
The intersection region of the original geometric features refers to an image region where the original geometric features describing the same part of the pavement marker in different road images are located. In contrast, the image region where other original geometric features in the road image are located is a non-intersection region of the original geometric features.
Illustratively, where the road surface is identified as a crosswalk, the extracted original geometric features are polygonal contours that make up the crosswalk. Because the pedestrian crossing is a road surface mark crossing lanes, part of the pedestrian crossing in the road image may be lost due to the problem of the acquisition position, and the missing parts of the pedestrian crossing in different road images may have differences. The overlapped part of the pedestrian crossing in the road image, namely the pedestrian crossing included in each road image, is the intersection area of the original geometric features.
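Treating the two crosswalk footprints as axis-aligned boxes is a deliberate simplification of the general polygon case, but it makes the intersection/non-intersection split concrete; the coordinates below are invented for illustration:

```python
def intersection_region(box_a, box_b):
    """Boxes are (xmin, ymin, xmax, ymax) footprints of the same marking
    seen from two lanes. Returns their overlap, or None if disjoint.
    The non-intersection regions are then each box minus this overlap."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    return (ix0, iy0, ix1, iy1) if ix0 < ix1 and iy0 < iy1 else None

# Crosswalk partially visible from each of two lanes
print(intersection_region((0, 0, 10, 2), (6, 0, 16, 2)))  # → (6, 0, 10, 2)
```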
S240, extracting candidate local features of the at least two original geometric features in the intersection region, and determining a first local feature of the intersection region according to the confidence of the candidate local features in the intersection region.
The candidate local features refer to parts of original geometric features located in the intersection region. The confidence level refers to the accuracy of the original geometric features in the intersection region, and generally speaking, a region with small deformation of the pavement marker in the road image has a higher confidence level than other regions. The confidence degree can be obtained through network model analysis, can be determined through comparison analysis of candidate local features based on standard geometric features, and can also be determined according to the distance between the road surface identifier and the acquisition position. The determination method of the confidence level is not limited here, and is specifically determined according to the actual situation.
Wherein the first local feature refers to a candidate local feature for generating the target geometric feature.
The candidate local features refer to original geometric features located in intersection areas in different road images, and due to the fact that the deformation degrees of the candidate local features are different, confidence degrees corresponding to the candidate local features are different.
And extracting candidate local features of the at least two original geometric features in the intersection region, and determining the first local feature of the intersection region according to the confidence of the candidate local features in the intersection region. Specifically, candidate local features of at least two original geometric features in an intersection region are extracted, confidence degrees corresponding to the candidate local features are calculated, and the candidate local feature corresponding to the maximum confidence degree is selected as the first local feature. Therefore, under the condition that the geometric features of the pavement markers are redundant, the candidate local features can be screened according to the confidence coefficient, the relatively accurate geometric features are selected, and the accuracy of the geometric features is guaranteed.
In an alternative embodiment, the confidence of the candidate local feature in the intersection region is determined according to the distance value between the position of the intersection region and the acquisition position of the candidate local feature.
Wherein, the acquisition position of the candidate local feature may be an acquisition position of the road image. It is known that, in the process of capturing an image with a camera, deformation occurs more severely in a region farther from the capturing position due to a problem of the angle of view of the camera, especially for the ground marking across the lane.
The road images are acquired at different positions in the same road area; the intersection region appears in different road images, and the corresponding candidate local features have different acquisition positions. The confidence of a candidate local feature in the intersection region is determined from the distance between its acquisition position and the intersection region. Specifically, the Euclidean distance between the center of the intersection region and the acquisition position of the road image can be computed in the world coordinate system, and the confidence of the candidate local feature determined from it. In general, the distance is inversely related to the confidence: the greater the distance, the lower the confidence.
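One plausible realization of this inverse relation is sketched below; the 1/(1+d) form is an assumption for illustration, not mandated by the text, and any monotonically decreasing function of distance would fit the description.

```python
import math

def confidence(region_center, capture_pos):
    """Inverse-distance confidence: a candidate captured closer to the
    intersection region deforms less and therefore scores higher."""
    d = math.dist(region_center, capture_pos)  # Euclidean distance (world frame)
    return 1.0 / (1.0 + d)

# Region center at (0, 10): a capture 5 m away outranks one 20 m away
c_near = confidence((0.0, 10.0), (0.0, 5.0))
c_far = confidence((0.0, 10.0), (0.0, 30.0))
print(c_near > c_far)  # → True
```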
According to the method and the device, the confidence of the candidate local features in the intersection region is determined according to the distance value between the intersection region position and the collection position of the candidate local features, and the extraction accuracy rate of the geometric features corresponding to the pavement markers is improved.
And S250, extracting second local features of the at least two original geometric features in the non-intersection area.
The original geometric features in the non-intersection area respectively describe the geometric characteristics of different parts of the pavement marker, and in order to ensure the integrity of pavement marker information, the original geometric features in the non-intersection area are respectively extracted to serve as second local features. The second local feature refers to an original geometric feature located in a non-intersection region, and the second local feature is used for generating the target geometric feature together with the first local feature.
And S260, determining the target geometric characteristics of the road surface marks according to the first local characteristics and the second local characteristics.
The first local feature is generated from the intersection region of the original geometric features, and the second local features from the non-intersection regions. The target geometric features of the pavement marker are determined from the first and second local features; specifically, the first and second local features are spliced together to obtain the target geometric features.
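The splicing step can be sketched in one dimension, representing local features as intervals along the marking's long axis (real features are polygons; the interval form is an assumed simplification for illustration):

```python
def splice(first_local, second_locals):
    """Merge the selected intersection-region feature with the
    non-intersection features into one target feature by interval union."""
    intervals = sorted([first_local] + second_locals)
    merged = [list(intervals[0])]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1]:               # touching/overlapping: extend
            merged[-1][1] = max(merged[-1][1], hi)
        else:                                 # gap: start a new piece
            merged.append([lo, hi])
    return [tuple(m) for m in merged]

# Intersection span (6, 10) plus the two lane-specific non-intersection spans
print(splice((6, 10), [(0, 6), (10, 16)]))  # → [(0, 16)]
```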
According to the technical scheme, the intersection and non-intersection regions of the original geometric features are distinguished. In the intersection region, where the original geometric features are redundant, the candidate local features are screened by confidence and the relatively accurate first local feature is selected, guaranteeing the accuracy of the target geometric features. The second local features of the at least two original geometric features in the non-intersection regions are also extracted, and the target geometric features of the pavement marker are determined from the first and second local features, guaranteeing the completeness of the marking information.
FIG. 3A is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application; the present embodiment is an alternative proposed on the basis of the above-described embodiments. Specifically, the operation "determine the first local feature of the intersection region according to the confidence of the candidate local feature in the intersection region" is refined.
Referring to fig. 3A, the high-precision map feature extraction method provided in this embodiment includes:
s310, acquiring at least two frames of road images acquired in different lanes for the same road area.
And S320, respectively extracting the original geometric features of the road surface marks in the at least two frames of road images.
S330, determining an intersection area and a non-intersection area of at least two original geometric features.
S340, extracting candidate local features of the at least two original geometric features in the intersection area, and dividing the intersection area into at least two sub-areas.
The intersection region is divided into at least two sub-regions. Specifically, taking the vehicle driving direction as the reference direction, the intersection region corresponding to road surface markings perpendicular to the reference direction is divided.
Optionally, the intersection region is divided into at least two sub-regions by dividing a straight line parallel to the reference direction.
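A minimal sketch of the strip division, modeling the intersection region by its 1-D extent along the axis the cut lines subdivide (an assumed simplification; the cut lines themselves run parallel to the driving direction):

```python
def split_into_strips(extent, n):
    """Split an (xmin, xmax) extent of the intersection region into n
    equal sub-region strips."""
    x0, x1 = extent
    w = (x1 - x0) / n
    return [(x0 + i * w, x0 + (i + 1) * w) for i in range(n)]

print(split_into_strips((0.0, 9.0), 3))  # → [(0.0, 3.0), (3.0, 6.0), (6.0, 9.0)]
```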
And S350, determining the first local feature of the intersection region according to the confidence of the candidate local features in each sub-region.
Different sub-regions correspond to different confidences. The confidences corresponding to each sub-region are computed, and the confidence of each candidate local feature in that sub-region is determined. The candidate local features in the intersection region are then screened by those confidences, and the first local features are selected from the intersection region.
Specifically, the confidence degrees of the candidate local features in each sub-region are sorted according to a certain order, and the candidate local feature corresponding to the maximum confidence degree is selected as the first local feature.
In an alternative embodiment, the confidence level of the candidate local feature in the sub-region is determined according to the distance value between the position of the sub-region and the acquisition position of the candidate local feature. Wherein, the acquisition position of the candidate local feature may be an acquisition position of the road image. Specifically, the euclidean distance between the central position of the sub-region and the acquisition position of the road image can be calculated in a world coordinate system, and the confidence of the candidate local feature in the sub-region is determined. Generally, distance values are inversely related to confidence, i.e., the greater the distance value, the less confidence. According to the method and the device, the confidence coefficient of the candidate local features in the sub-regions is determined according to the distance value between the sub-region positions and the collection positions of the candidate local features, and the extraction accuracy rate of the geometric features corresponding to the pavement markers is improved.
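Combining the strip division with the distance-based confidence, the per-sub-region selection can be sketched as follows. The image names, positions, and the nearest-capture-wins rule (equivalent to picking the maximum inverse-distance confidence) are illustrative assumptions:

```python
import math

def select_per_strip(strip_centers, captures):
    """For each sub-region (given by its center point), keep the source
    image whose acquisition position is nearest, i.e. whose candidate
    local feature has the highest distance-based confidence.
    `captures` maps a source-image name to its acquisition position."""
    chosen = []
    for center in strip_centers:
        best = min(captures, key=lambda name: math.dist(center, captures[name]))
        chosen.append(best)
    return chosen

# Strip centers along the crosswalk; image A shot from the left lane, B from the right
strips = [(2.0, 0.0), (8.0, 0.0)]
captures = {"A": (0.0, -5.0), "B": (10.0, -5.0)}
print(select_per_strip(strips, captures))  # → ['A', 'B']
```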
And S360, extracting second local features of the at least two original geometric features in the non-intersection area.
And S370, determining the target geometric characteristics of the road surface marks according to the first local characteristics and the second local characteristics.
According to the technical scheme, the intersection area is divided into at least two sub-areas, and the first local feature of the intersection area is determined according to the confidence degree of the candidate local features in each sub-area. The fine-grained calculation of the confidence degrees corresponding to the candidate local features is realized, the candidate local features are screened according to the confidence degrees, the accuracy of the first local features is further improved, and therefore the extraction accuracy of the geometric features corresponding to the pavement markers is guaranteed.
For ease of understanding, fig. 3B is a schematic flowchart of another high-precision map feature extraction method provided in an embodiment of the present application. As shown in fig. 3B, first, the original geometric feature A of the road surface marker in road image (e.g., road top view) a and the original geometric feature B of the road surface marker in road image (e.g., road top view) b are extracted. Then, the intersection region C and the non-intersection regions D1 and D2 of the original geometric features A and B are determined. Next, the intersection region C is divided into at least two sub-regions, where Ci is a sub-region of C, and Ai and Bi are the candidate local features of the original geometric features A and B, respectively, in sub-region Ci. According to the confidences of the candidate local features Ai and Bi in each sub-region Ci, the first local feature of the intersection region C is determined. The second local features are then extracted from the non-intersection region D1 of the original geometric feature A and the non-intersection region D2 of the original geometric feature B. Finally, the target geometric feature E of the road surface marker is determined according to the first local feature and the second local features.
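The per-sub-region selection in the fig. 3B flow can be sketched in Python as follows. The vertical split of the intersection region, the inverse-distance confidence, and all names are illustrative assumptions, not the patent's exact scheme; local features are represented as point sets for simplicity.

```python
import math

def fuse_local_features(points_a, pos_a, points_b, pos_b, region_c, n_cells=4):
    """Split intersection region C into n_cells vertical sub-regions C_i and,
    in each C_i, keep the candidate local feature (points from A or B) whose
    acquisition position yields the higher confidence for that sub-region."""
    xmin, ymin, xmax, ymax = region_c
    width = (xmax - xmin) / n_cells
    fused = []
    for i in range(n_cells):
        cx0, cx1 = xmin + i * width, xmin + (i + 1) * width
        centre = ((cx0 + cx1) / 2.0, (ymin + ymax) / 2.0)
        conf_a = 1.0 / (1.0 + math.dist(centre, pos_a))  # confidence of A_i
        conf_b = 1.0 / (1.0 + math.dist(centre, pos_b))  # confidence of B_i
        winner = points_a if conf_a >= conf_b else points_b
        fused += [p for p in winner
                  if cx0 <= p[0] < cx1 and ymin <= p[1] <= ymax]
    return fused
```

With an acquisition position on the left for image a and on the right for image b, the left sub-regions take A's candidates and the right sub-regions take B's, matching the selection described above; the second local features from D1 and D2 would then be appended unchanged.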
FIG. 4 is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application. The present embodiment is an alternative proposed on the basis of the above embodiments. Specifically, in the case where the road area contains at least two road surface markers, the operation of fusing at least two original geometric features to obtain the target geometric feature of the road surface marker is refined.
Referring to fig. 4, the high-precision map feature extraction method provided by the embodiment includes:
S410, acquiring at least two frames of road images acquired in different lanes for the same road area.
And S420, respectively extracting the original geometric features of the road surface marks in the at least two frames of road images.
And S430, if the road area comprises at least two road surface markers, determining at least two original geometric features associated with the same road surface marker according to the intersection-over-union relationship among the original geometric features.
The road area comprises at least two road surface marks, which means that at least two road surface marks of the same type exist in the same road area at the same time. For example, in the case where the road area is an intersection formed by two roads, the road area will usually include four crosswalks. When the road area includes at least two road surface markers, the original geometric features belonging to the same road surface marker need to be determined from the original geometric features corresponding to the road surface markers.
The intersection-over-union relationship describes the degree of overlap between the original geometric features; it can be determined using the intersection over union (IoU) metric from object detection tasks, which is not further expanded here.
At least two original geometric features associated with the same road surface marker are determined according to the intersection-over-union relationship among the original geometric features. Specifically, according to this relationship, the road surface marker corresponding to each original geometric feature is determined, and the original geometric feature is associated with that marker. For example, at least two original geometric features whose pairwise IoU exceeds a preset threshold are taken as the original geometric features corresponding to the same road surface marker.
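A minimal sketch of this IoU-based association, assuming axis-aligned bounding boxes and a greedy grouping strategy — both assumptions, since the embodiment only specifies comparing IoU against a preset threshold:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def group_by_marker(features, threshold=0.5):
    # Greedy grouping: a feature joins the first group containing a member
    # whose IoU with it exceeds the threshold; otherwise it starts a new
    # group. Each resulting group holds the original geometric features
    # associated with one road surface marker.
    groups = []
    for f in features:
        for g in groups:
            if any(iou(f, m) > threshold for m in g):
                g.append(f)
                break
        else:
            groups.append([f])
    return groups
```

Each group can then be fused independently, as step S440 below describes.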
S440, performing fusion processing on at least two original geometric features associated with the same pavement marker to obtain target geometric features of the at least two pavement markers.
And carrying out fusion processing on the original geometric characteristics corresponding to the same pavement marker to obtain the target geometric characteristics corresponding to the pavement marker. Optionally, under the condition that the road region includes at least two road surface identifiers, the original geometric features associated with the road surface identifiers are respectively subjected to fusion processing, so as to obtain the target geometric features corresponding to the road surface identifiers.
According to the technical scheme of this embodiment, in the case where a road area comprises at least two road surface markers, at least two original geometric features associated with the same road surface marker are determined according to the intersection-over-union relationship between the original geometric features; the at least two original geometric features associated with the same road surface marker are then fused to obtain the target geometric features of the at least two road surface markers. By fusing only the original geometric features associated with the same road surface marker, the extraction accuracy of the geometric features corresponding to each marker is guaranteed.
FIG. 5 is a schematic diagram of another high-precision map feature extraction method according to an embodiment of the present application. The present embodiment is an alternative proposed on the basis of the above embodiments. Specifically, the operation of "extracting the original geometric features of the road surface markers in the at least two frames of road images respectively" is refined.
Referring to fig. 5, the high-precision map feature extraction method provided by the embodiment includes:
S510, acquiring at least two frames of road images acquired in different lanes for the same road area.
S520, determining a road top view of the at least two frames of road images.
Since the road image is generally a front view while the road surface marker is a figure drawn on the road surface, the geometric features of the road surface marker extracted from a road top view have higher accuracy than those extracted from the front view.
Specifically, when determining the road top view, the front view may be converted into a top view based on perspective-transformation and affine-transformation principles, or the top view may be generated by combining point cloud data of the road area with the front view of the road area. The present embodiment preferably combines point cloud data to generate the road top view.
In an alternative embodiment, determining a top view of the road for the at least two frames of road images comprises: and determining a corresponding road top view of the road area under each lane according to at least two frames of road images and at least two frames of point cloud data acquired for the road area in each lane.
The point cloud data and the road image of the road area are acquired in the same road area through a laser radar and a camera which are configured on an image acquisition vehicle in different lanes.
Specifically, for the point cloud data and road image data collected in each lane, semantic segmentation can be performed on each collected road image frame to determine the ground object area in the image, and ground point detection can be performed on the point cloud data. The detected frames of ground points are then projected into each road image frame according to the calibration relationship between the laser radar and the camera, and each projected point is checked against the semantically segmented ground object area to determine whether it is a target projection point. If so, an optimal frame is selected from the multiple frames of ground object areas corresponding to the target projection points and fused to generate the road top view. That is, one road top view is determined for the data collected in each lane. By combining the point cloud data of the road area with the road images to generate the road top view, the ground position is determined from the point cloud data, which improves the accuracy of feature extraction.
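The projection step above — taking ground points from the LiDAR frame into pixel coordinates via the LiDAR-camera extrinsic calibration (R, t) and the camera intrinsic matrix K — can be sketched with a standard pinhole model. The function name and plain nested-list interfaces are assumptions for illustration:

```python
def project_ground_points(points_lidar, R, t, K):
    """Project 3D LiDAR points into pixel coordinates: p_cam = R @ p + t,
    then (u, v) = (K @ p_cam)[:2] / depth. A minimal pinhole-camera sketch,
    not the patent's implementation."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    pixels = []
    for p in points_lidar:
        pc = [a + b for a, b in zip(matvec(R, p), t)]  # camera frame
        if pc[2] <= 0:   # point behind the camera: no valid projection
            continue
        x, y, z = matvec(K, pc)
        pixels.append((x / z, y / z))
    return pixels
```

Each returned pixel would then be tested against the semantically segmented ground object area to decide whether it is a target projection point.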
It should be noted that, when determining the road top views of the at least two frames of road images, this embodiment may determine one top view for each road image frame, or determine one top view for all the road image frames collected in the same lane. To ensure the accuracy of the top view, the latter approach is preferred.
S530, extracting road surface identification areas from the at least two frames of road top views respectively, and performing feature extraction on the road surface identification areas to obtain original geometric features of the road surface identifications in the at least two frames of road images.
In general, the captured road image often includes, in addition to the road area, its surroundings — other elements such as vehicles, pedestrians, and vegetation. In this embodiment, the road surface marking area is taken as the region of interest and needs to be extracted from the road image. Feature extraction is then performed on the road surface marking area, extracting the features of the polygons that form the road surface marking as the original geometric features of the road surface marker in the road image.
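One simple way to obtain a polygon from the pixels of a segmented marking region is a convex hull, sketched below with Andrew's monotone chain. This is an illustrative stand-in: the embodiment does not specify which polygon-extraction algorithm is used, and real markings (e.g., arrows) may require non-convex contour tracing instead.

```python
def convex_polygon(points):
    """Andrew's monotone-chain convex hull: turns the pixel positions of a
    segmented road-marking region into polygon vertices that can serve as
    the original geometric feature."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half_hull(seq):
        h = []
        for p in seq:
            # pop while the last two kept points and p make a non-left turn
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]   # counter-clockwise polygon vertices
```

Interior pixels of the marking region are discarded, leaving only the boundary vertices of the polygon.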
And S540, carrying out fusion processing on at least two original geometric characteristics to obtain the target geometric characteristics of the pavement marker.
According to the technical scheme of this embodiment, road top views of the at least two frames of road images are determined; road surface marking areas are extracted from the at least two road top views respectively, and feature extraction is performed on the marking areas to obtain the original geometric features of the road surface markers in the at least two frames of road images; and at least two original geometric features are fused to obtain the target geometric feature of the road surface marker. Since road surface markers are figures drawn on the road surface, extracting their geometric features from the road top view effectively improves the extraction accuracy of the corresponding geometric features.
Fig. 6 is a schematic diagram of a high-precision map feature extraction device according to an embodiment of the present application; referring to fig. 6, an embodiment of the present application discloses a high-precision map feature extraction apparatus 600, where the apparatus 600 may include: a road image acquisition module 610, an original geometric feature extraction module 620 and an original geometric feature fusion module 630.
A road image acquisition module 610, configured to acquire at least two frames of road images acquired in different lanes for the same road area;
an original geometric feature extraction module 620, configured to extract original geometric features of the road surface identifiers in the at least two frames of road images, respectively;
and an original geometric feature fusion module 630, configured to perform fusion processing on at least two original geometric features to obtain a target geometric feature of the road surface identifier.
According to the technical scheme of this embodiment, at least two frames of road images collected in different lanes for the same road area are obtained; the original geometric features of the road surface markers in the road images collected in each lane are extracted respectively; and at least two original geometric features are fused to obtain the target geometric feature of the road surface marker. Fusing at least two original geometric features effectively integrates the geometry captured from different camera viewing angles, yielding a more accurate and more complete target geometric feature. This effectively solves the problem that geometric features of road surface markers extracted from a single road image are inaccurate because the markers appear incomplete or distorted in that image.
Optionally, the original geometric feature fusion module 630 includes: an intersection region determining submodule for determining an intersection region and a non-intersection region of at least two of the original geometric features; a first local feature determining submodule, configured to extract candidate local features of the at least two original geometric features in the intersection region, and determine a first local feature of the intersection region according to a confidence of the candidate local features in the intersection region; a second local feature extraction submodule, configured to extract a second local feature of the at least two original geometric features in the non-intersection region; and the target geometric characteristic determining submodule is used for determining the target geometric characteristic of the pavement marker according to the first local characteristic and the second local characteristic.
Optionally, the first local feature determination sub-module includes: a sub-region dividing unit, configured to divide the intersection region into at least two sub-regions; and the first local feature determining unit is used for determining the first local feature of the intersection region according to the confidence degrees of the candidate local features in the sub-regions.
Optionally, the apparatus further comprises: and the confidence determining module is specifically configured to determine the confidence of the candidate local features in the intersection region or the sub-region according to the distance value between the intersection region position or the sub-region position and the acquisition position of the candidate local features.
Optionally, if the road region includes at least two road surface markers, the original geometric feature fusion module 630 includes an associated original geometric feature determination sub-module, configured to determine at least two original geometric features associated with the same road surface marker according to the intersection-over-union relationship between the original geometric features; and the target geometric feature determination sub-module is configured to fuse the at least two original geometric features associated with the same road surface marker to obtain the target geometric features of the at least two road surface markers.
Optionally, the original geometric feature extraction module 620 includes: the road top view determining submodule is used for determining a road top view of the at least two frames of road images; and the road surface identification region feature extraction submodule is used for extracting the road surface identification regions from the at least two frames of road top views respectively and extracting the features of the road surface identification regions to obtain the original geometric features of the road surface identifications in the at least two frames of road images.
Optionally, the road top view determining submodule is specifically configured to determine a road top view corresponding to the road area under each lane according to at least two frames of road images and at least two frames of point cloud data acquired from each lane for the road area.
Optionally, the road surface sign is a lane-crossing road surface sign.
The high-precision map feature extraction device provided by the embodiment of the application can execute the high-precision map feature extraction method provided by any embodiment of the application, and has corresponding functional modules and beneficial effects for executing the high-precision map feature extraction method.
In the technical scheme of the application, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good custom of the public order.
"it should be noted that the head model in this embodiment is not a head model for a specific user, and cannot reflect personal information of a specific user";
"the two-dimensional face image in the present embodiment is derived from a public data set", and the like.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
Artificial intelligence is the subject of research that makes computers simulate certain human mental processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), at both the hardware and software levels. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph technology, and the like.
Cloud computing (cloud computing) refers to a technology system that accesses a flexibly extensible shared physical or virtual resource pool through a network, where resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed in a self-service manner as needed. Through the cloud computing technology, high-efficiency and strong data processing capacity can be provided for technical application and model training of artificial intelligence, block chains and the like.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and the present invention is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (19)
1. A high-precision map feature extraction method comprises the following steps:
acquiring at least two frames of road images acquired in different lanes in the same road area;
respectively extracting the original geometric features of the pavement markers in the at least two frames of road images;
and carrying out fusion processing on at least two original geometric characteristics to obtain the target geometric characteristics of the pavement marker.
2. The method according to claim 1, wherein the fusing the at least two original geometric features to obtain the target geometric feature of the pavement marker comprises:
determining an intersection region and a non-intersection region of at least two of the original geometric features;
extracting candidate local features of at least two original geometric features in the intersection region, and determining a first local feature of the intersection region according to the confidence of the candidate local features in the intersection region;
extracting second local features of at least two original geometric features in the non-intersection region;
and determining the target geometric characteristic of the pavement marker according to the first local characteristic and the second local characteristic.
3. The method of claim 2, wherein the determining the first local feature of the intersection region according to the confidence of the candidate local feature at the intersection region comprises:
dividing the intersection area into at least two sub-areas;
and determining the first local feature of the intersection region according to the confidence degree of the candidate local feature in each sub-region.
4. The method of claim 2 or 3, further comprising:
and determining the confidence of the candidate local features in the intersection area or the sub-area according to the distance value between the intersection area position or the sub-area position and the acquisition position of the candidate local features.
5. The method according to claim 1, wherein, if the road region includes at least two road surface markers, performing fusion processing on at least two of the original geometric features to obtain a target geometric feature of the road surface marker, includes:
determining at least two original geometric features associated with the same pavement marker according to the intersection-over-union relationship among the original geometric features;
and carrying out fusion processing on at least two original geometric characteristics associated with the same pavement marker to obtain the target geometric characteristics of the at least two pavement markers.
6. The method of claim 1, wherein separately extracting original geometric features of the road surface markers in the at least two frames of road images comprises:
determining a road top view of the at least two frames of road images;
and respectively extracting pavement marking areas from the at least two frames of road top views, and performing feature extraction on the pavement marking areas to obtain original geometric features of the pavement markings in the at least two frames of road images.
7. The method of claim 6, wherein determining a road top view of the at least two frames of road images comprises:
and determining a corresponding road top view of the road area under each lane according to at least two frames of road images and at least two frames of point cloud data acquired for the road area in each lane.
8. The method of any of claims 1-7, wherein the road surface marking is a lane-crossing road surface marking.
9. A high-precision map feature extraction device includes:
the road image acquisition module is used for acquiring at least two frames of road images acquired in different lanes in the same road area;
the original geometric feature extraction module is used for respectively extracting original geometric features of the pavement markers in the at least two frames of road images;
and the original geometric feature fusion module is used for carrying out fusion processing on at least two original geometric features to obtain the target geometric features of the pavement marker.
10. The apparatus of claim 9, wherein the raw geometry feature fusion module comprises:
an intersection region determining submodule for determining an intersection region and a non-intersection region of at least two of the original geometric features;
a first local feature determining submodule, configured to extract candidate local features of the at least two original geometric features in the intersection region, and determine a first local feature of the intersection region according to a confidence of the candidate local features in the intersection region;
a second local feature extraction submodule, configured to extract a second local feature of the at least two original geometric features in the non-intersection region;
and the target geometric characteristic determining submodule is used for determining the target geometric characteristic of the pavement marker according to the first local characteristic and the second local characteristic.
11. The apparatus of claim 10, wherein the first local feature determination submodule comprises:
a sub-region dividing unit, configured to divide the intersection region into at least two sub-regions;
and a first local feature determining unit configured to determine the first local feature of the intersection region according to the confidence of the candidate local features in each sub-region.
12. The apparatus of claim 10 or 11, further comprising: a confidence determining module configured to determine the confidence of the candidate local features in the intersection region or the sub-region according to the distance between the position of the intersection region or the sub-region and the acquisition position of the candidate local features.
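Claims 11 and 12 score candidate local features by how far the intersection region (or sub-region) lies from where each candidate was acquired, then keep the most confident candidate per sub-region. A minimal sketch under stated assumptions: the exponential decay, the `scale` parameter, and the dictionary layout of candidates are illustrative choices, not taken from the patent:

```python
import math

def local_feature_confidence(region_center, acquisition_pos, scale=10.0):
    """Confidence of a candidate local feature: the farther the region lies
    from the position where the feature was captured, the lower the score.
    Exponential decay is one plausible monotone mapping, not the patent's."""
    dx = region_center[0] - acquisition_pos[0]
    dy = region_center[1] - acquisition_pos[1]
    return math.exp(-math.hypot(dx, dy) / scale)

def select_first_local_features(sub_regions, candidates):
    """For each sub-region center, keep the candidate local feature whose
    acquisition position yields the highest confidence.

    sub_regions : list of (x, y) sub-region centers
    candidates  : list of dicts {"feature": ..., "acq_pos": (x, y)}
    """
    selected = []
    for center in sub_regions:
        best = max(candidates,
                   key=lambda c: local_feature_confidence(center, c["acq_pos"]))
        selected.append(best["feature"])
    return selected
```

The intuition matches the claims: a marking observed from a nearby lane pass is trusted over one observed from farther away.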
13. The apparatus of claim 9, wherein, if the road area contains at least two road surface markings, the original geometric feature fusion module comprises:
an associated original geometric feature determining submodule configured to determine at least two original geometric features associated with the same road surface marking according to the intersection-over-union (IoU) relationship among the original geometric features;
and a target geometric feature determining submodule configured to perform fusion processing on the at least two original geometric features associated with the same road surface marking to obtain the target geometric features of the at least two road surface markings.
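Claim 13 associates geometric features belonging to the same marking by their intersection-over-union. A minimal sketch, assuming features are summarized by axis-aligned bounding boxes; the greedy grouping strategy and the 0.5 threshold are illustrative assumptions, not specified by the patent:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def group_by_iou(boxes, threshold=0.5):
    """Greedily group feature bounding boxes whose IoU with any member of an
    existing group exceeds the threshold; each resulting group is assumed to
    belong to one road surface marking."""
    groups = []
    for i, box in enumerate(boxes):
        placed = False
        for group in groups:
            if any(iou(box, boxes[j]) >= threshold for j in group):
                group.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return groups
```

Each group of indices would then be fused into one target geometric feature, as the claim's fusion submodule describes.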
14. The apparatus of claim 9, wherein the original geometric feature extraction module comprises:
a road top view determining submodule configured to determine a road top view of the at least two frames of road images;
and a road surface marking area feature extraction submodule configured to respectively extract road surface marking areas from the at least two road top views, and to perform feature extraction on the road surface marking areas to obtain the original geometric features of the road surface markings in the at least two frames of road images.
15. The apparatus according to claim 14, wherein the road top view determining submodule is configured to determine, according to at least two frames of road images and at least two frames of point cloud data acquired for the road area in each lane, a road top view of the road area corresponding to each lane.
16. The apparatus of any one of claims 9-15, wherein the road surface marking is a lane-crossing road surface marking.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111272599.0A CN114037966A (en) | 2021-10-29 | 2021-10-29 | High-precision map feature extraction method, device, medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114037966A true CN114037966A (en) | 2022-02-11 |
Family
ID=80135843
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111272599.0A Pending CN114037966A (en) | 2021-10-29 | 2021-10-29 | High-precision map feature extraction method, device, medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114037966A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114677570A (en) * | 2022-03-14 | 2022-06-28 | 北京百度网讯科技有限公司 | Road information updating method, device, electronic equipment and storage medium |
CN114626462A (en) * | 2022-03-16 | 2022-06-14 | 小米汽车科技有限公司 | Pavement mark recognition method, device, equipment and storage medium |
CN115100426A (en) * | 2022-06-23 | 2022-09-23 | 高德软件有限公司 | Information determination method and device, electronic equipment and computer program product |
CN115100426B (en) * | 2022-06-23 | 2024-05-24 | 高德软件有限公司 | Information determination method, apparatus, electronic device and computer program product |
CN115438516A (en) * | 2022-11-07 | 2022-12-06 | 阿里巴巴达摩院(杭州)科技有限公司 | Simulation map generation method, electronic device and computer storage medium |
CN115438516B (en) * | 2022-11-07 | 2023-03-24 | 阿里巴巴达摩院(杭州)科技有限公司 | Simulation map generation method, electronic device and computer storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110148196B (en) | Image processing method and device and related equipment | |
US10074020B2 (en) | Vehicular lane line data processing method, apparatus, storage medium, and device | |
WO2018068653A1 (en) | Point cloud data processing method and apparatus, and storage medium | |
CN114037966A (en) | High-precision map feature extraction method, device, medium and electronic equipment | |
CN113989450B (en) | Image processing method, device, electronic equipment and medium | |
CN111950345B (en) | Camera identification method and device, electronic equipment and storage medium | |
CN115410173B (en) | Multi-mode fused high-precision map element identification method, device, equipment and medium | |
CN114443794A (en) | Data processing and map updating method, device, equipment and storage medium | |
CN113971723A (en) | Method, device, equipment and storage medium for constructing three-dimensional map in high-precision map | |
EP4080479A2 (en) | Method for identifying traffic light, device, cloud control platform and vehicle-road coordination system | |
CN115841552A (en) | High-precision map generation method and device, electronic equipment and medium | |
CN113742440B (en) | Road image data processing method and device, electronic equipment and cloud computing platform | |
CN113011298B (en) | Truncated object sample generation, target detection method, road side equipment and cloud control platform | |
CN113297878B (en) | Road intersection identification method, device, computer equipment and storage medium | |
CN113887391A (en) | Method and device for recognizing road sign and automatic driving vehicle | |
CN114724113B (en) | Road sign recognition method, automatic driving method, device and equipment | |
CN116434181A (en) | Ground point detection method, device, electronic equipment and medium | |
CN111860084A (en) | Image feature matching and positioning method and device and positioning system | |
CN115761698A (en) | Target detection method, device, equipment and storage medium | |
CN114333409A (en) | Target tracking method and device, electronic equipment and storage medium | |
CN113806361B (en) | Method, device and storage medium for associating electronic monitoring equipment with road | |
CN113177481B (en) | Target detection method, target detection device, electronic equipment and storage medium | |
CN117854038A (en) | Construction area acquisition method and device, electronic equipment and automatic driving vehicle | |
KR20220119167A (en) | Method and apparatus for identifying vehicle lane departure, electronic device, and storage medium | |
CN114155508A (en) | Road change detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||