CN115512316A - Static obstacle grounding contour line multi-frame fusion method, device and medium - Google Patents

Static obstacle grounding contour line multi-frame fusion method, device and medium

Info

Publication number
CN115512316A
Authority
CN
China
Prior art keywords
frame
points
frame point
point cluster
single frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110622705.7A
Other languages
Chinese (zh)
Inventor
吴军
康雪杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Momenta Suzhou Technology Co Ltd filed Critical Momenta Suzhou Technology Co Ltd
Priority to CN202110622705.7A priority Critical patent/CN115512316A/en
Priority to PCT/CN2021/109538 priority patent/WO2022252380A1/en
Publication of CN115512316A publication Critical patent/CN115512316A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a static obstacle grounding contour line multi-frame fusion method, device and medium, belonging to the technical field of automatic driving. The method comprises the following steps: a preprocessing process; a single frame point acquisition process; a single frame point clustering process; a single frame point cluster matching process; and a multi-frame fusion process, in which all single frame points matched to any one point cluster class in the single frame point cluster matching process are converted into a global Cartesian coordinate system, the surrounding area is rasterized in the global Cartesian coordinate system to obtain a temporary map, and the information of a grid is stored to a stable map if the total number of occurrences of the corresponding single frame points of consecutive multi-frame environment images in that grid of the temporary map is greater than the single frame point occurrence threshold. In the static obstacle grounding contour line multi-frame fusion method of the application, the complexity of the calculation process is close to linear, so the calculation amount is small and the processing speed is high, which can meet the requirements of automatic driving on the recognition speed, accuracy and stability for static obstacles.

Description

Static obstacle grounding contour line multi-frame fusion method, device and medium
Technical Field
The application relates to the technical field of automatic driving, in particular to a static obstacle grounding contour line multi-frame fusion method, device and medium.
Background
In the automatic driving process, to ensure driving safety, the driving route needs to be planned in real time to avoid touching or even colliding with possible obstacles. This requires that the exact position of an obstacle be identified and determined dynamically in real time during autonomous driving. In the prior art, obstacles are mostly detected by mounting detection devices such as laser radar or ultrasonic sensors on the vehicle to directly measure the distance between the obstacle and the autonomous vehicle. However, devices such as laser radar and ultrasonic sensors are expensive, and when the measured distance between the obstacle and the running vehicle is used directly for vehicle obstacle avoidance planning, the required calculation amount is huge and the delay of data processing and transmission is too large, making such devices difficult to apply directly to automatic driving with high real-time requirements. Meanwhile, the measured data are unstable and contain a large amount of noise, which, if used directly, can seriously interfere with the judgment of the automatic driving algorithms.
Disclosure of Invention
The application provides a static obstacle grounding contour line multi-frame fusion method, device and medium, aimed at the technical problem in the prior art that the calculation amount required for vehicle obstacle avoidance planning is too large, so that the delay of data processing and transmission is too large.
In one embodiment of the present application, a static obstacle grounding contour line multi-frame fusion method is provided, which includes: a preprocessing process, namely extracting obstacle contour points for each frame of environment image shot by cameras arranged around a vehicle, and filtering out points lying outside a predetermined geometric range among the obstacle contour points to obtain filtered obstacle contour points; a single frame point acquisition process, namely converting the filtered obstacle contour points corresponding to each frame of environment image into a polar coordinate system taking the center of the vehicle as its origin, and, among all filtered obstacle contour points with the same angular coordinate, retaining the point closest to the origin of the polar coordinate system as a single frame point; a single frame point clustering process, namely, after sorting all single frame points by the magnitude of their angular coordinate values, taking for each single frame point the nearest other single frame point, calculating the angular coordinate difference and relative distance between the two single frame points, and clustering the two single frame points into the same single frame point cluster if the angular coordinate difference is smaller than the angular coordinate difference threshold and the relative distance is smaller than the relative distance threshold; a single frame point cluster matching process, namely traversing, among the single frame point clusters, a current frame point cluster belonging to the current frame environment image and a historical frame point cluster belonging to an adjacent historical frame environment image, and matching the current frame point cluster and the historical frame point cluster into the same point cluster class if the distance between them is smaller than a first distance threshold and the degree of coincidence of their mutual projections is greater than a coincidence threshold; and a multi-frame fusion process, namely converting all single frame points matched to any one point cluster class in the single frame point cluster matching process into a global Cartesian coordinate system, rasterizing the surrounding area to obtain a temporary map, and, if the total number of occurrences of the corresponding single frame points of consecutive multi-frame environment images in a grid of the temporary map is greater than the single frame point occurrence threshold, storing the information of that grid to a stable map.
In another aspect of the present application, there is provided a static obstacle grounding contour line multi-frame fusion device, including: a preprocessing module, which extracts obstacle contour points for each frame of environment image shot by cameras arranged around a vehicle, and filters out points lying outside a predetermined geometric range among the obstacle contour points to obtain filtered obstacle contour points; a single frame point acquisition module, which converts the filtered obstacle contour points corresponding to each frame of environment image into a polar coordinate system taking the center of the vehicle as its origin, and, among all filtered obstacle contour points with the same angular coordinate, retains the point closest to the origin of the polar coordinate system as a single frame point; a single frame point clustering module, which, after sorting all single frame points by the magnitude of their angular coordinate values, takes for each single frame point the nearest other single frame point, calculates the angular coordinate difference and relative distance between the two single frame points, and clusters the two single frame points into the same single frame point cluster if the angular coordinate difference is smaller than the angular coordinate difference threshold and the relative distance is smaller than the relative distance threshold; a single frame point cluster matching module, which traverses, among the single frame point clusters, a current frame point cluster belonging to the current frame environment image and a historical frame point cluster belonging to an adjacent historical frame environment image, and matches the current frame point cluster and the historical frame point cluster into the same point cluster class if the distance between them is smaller than a first distance threshold and the degree of coincidence of their mutual projections is greater than a coincidence threshold; and a multi-frame fusion module, which converts all single frame points matched to any one point cluster class in the single frame point cluster matching process into a global Cartesian coordinate system, rasterizes the surrounding area to obtain a temporary map, and, if the total number of occurrences of the corresponding single frame points of consecutive multi-frame environment images in a grid of the temporary map is greater than the single frame point occurrence threshold, stores the information of that grid to a stable map.
In another aspect of the present application, a computer-readable storage medium is provided, which stores computer instructions that, when run, execute the static obstacle grounding contour line multi-frame fusion method of the above aspect.
In another aspect of the present application, there is provided a computer apparatus including: a memory storing computer instructions; and a processor that runs the computer instructions to execute the static obstacle grounding contour line multi-frame fusion method of the above aspect.
By adopting the above technical solutions, the present application can achieve the following technical effects: the complexity of the calculation process is close to linear, so the calculation amount is small and the processing speed is high, which can meet the requirements of automatic driving on the recognition speed, accuracy and stability for static obstacles.
Drawings
In order to illustrate the specific embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing them are briefly introduced below. The drawings described below relate to some specific embodiments of the present application; other drawings corresponding to equivalent substitutions or modifications of these specific embodiments can be determined straightforwardly and unambiguously from them by those skilled in the art without inventive effort.
FIG. 1 is a flow chart diagram illustrating one embodiment of a static obstacle grounded contour multi-frame fusion method according to the present application;
FIG. 2 is a diagram illustrating an example of a single frame point cluster matching process in one embodiment of the static obstacle grounded contour multi-frame fusion method of the present application;
FIG. 3 is a diagram illustrating an example of a multi-frame fusion process in one embodiment of the static obstacle grounded contour multi-frame fusion method of the present application;
FIG. 4 is a schematic diagram showing the constituent modules of an embodiment of the static obstacle grounding contour line multi-frame fusion apparatus according to the present application.
With the foregoing drawings, certain embodiments of the present application are shown and will be described in more detail below. The drawings and the description are not intended to limit the scope of the inventive concepts of the present application in any way, but rather to help those skilled in the art understand the inventive concepts of the present application through a few specific embodiments.
Detailed Description
In order to make the technical solutions of the present application more easily understood by those skilled in the art, some specific embodiments of the technical solutions of the present application will be clearly and completely described with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments that can be derived by a person skilled in the art from the embodiments described herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," "third," "fourth," etc., when used herein, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that such terms are interchangeable where appropriate, so that the aspects of the application described herein may be implemented in sequences other than those illustrated or described. Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a method comprising a list of steps, or an article or device comprising a plurality of elements or modules, is not necessarily limited to the listed steps, elements or modules, but may include other steps, elements or modules not expressly listed or inherent to such method, article or device.
In addition, the embodiments, examples, and the like described below with respect to different parts of the technical solutions of the present application may be combined with one another to form a complete technical solution, unless they are mutually exclusive. For the same or similar concepts, procedures, etc. that have already been described in one specific embodiment or example, the detailed description may not be repeated in other specific embodiments or examples.
Some embodiments, examples, and the like of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an embodiment of a static obstacle grounded contour line multi-frame fusion method according to the present application.
In the embodiment shown in fig. 1, the static obstacle grounded contour line multi-frame fusion method of the present application includes: s101, preprocessing.
In the preprocessing process represented by S101, for each frame of environment image captured by cameras disposed around the vehicle, the obstacle contour points are extracted, and points located outside the predetermined geometric range among the obstacle contour points are filtered out, so as to obtain filtered obstacle contour points.
In one example of this embodiment, the number of cameras deployed around the vehicle may be one or more. When there are multiple vehicle cameras, they may be installed at different positions of the vehicle. In that case, each frame of environment image may be obtained by fusing the corresponding environment images captured by the multiple vehicle cameras.
In an embodiment of the present application, in the process of extracting obstacle contour points for each frame of environment image captured by the cameras disposed around the vehicle, a dedicated neural network algorithm based on deep learning may be used to extract the obstacle contour points, or other existing techniques may be used.
In an embodiment of the present application, the process of filtering out points located outside the predetermined geometric range from among the obstacle contour points to obtain filtered obstacle contour points may include: removing, among the obstacle contour points, points exceeding a view angle threshold of the camera, and removing points whose distance from the camera is smaller than a second distance threshold or larger than a third distance threshold, wherein the second distance threshold is smaller than the third distance threshold.
In one example of this embodiment, the view angle threshold may be any value between 90 and 150 degrees, the second distance threshold may be any value between 3 and 7 meters, and the third distance threshold may be any value between 8 and 12 meters. Optionally, the view angle threshold is 120 degrees, the second distance threshold is 6 meters, and the third distance threshold is 10 meters. These thresholds are related to the range of shooting distances within which the vehicle camera's shooting accuracy is good; when different vehicle cameras are used, the thresholds can be adjusted accordingly, even beyond the above ranges.
In one embodiment of the present application, for each frame of environment image captured by the cameras disposed around the vehicle, the obstacle contour points exceeding the view angle threshold of the vehicle camera are removed, and the pixel points whose distance from the camera is too small or too large are removed. Therefore, in the subsequent single frame point acquisition process represented by S102, only the obstacle contour points meeting the threshold requirements need to be processed, which makes the processing more targeted, reduces the calculation amount, and reduces unnecessary noise, thereby increasing the processing speed of the technical solution of the present application so that it can be better applied to automatic driving with high real-time requirements.
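Purely as an illustrative sketch (not a reference implementation from the application), the geometric filtering described above could look as follows in Python; the camera pose parameters (cam_x, cam_y, cam_yaw) and the default thresholds are assumptions taken from the optional values above:

```python
import math

def filter_contour_points(points, cam_x, cam_y, cam_yaw,
                          view_angle_deg=120.0,   # assumed view angle threshold
                          d_min=6.0, d_max=10.0): # assumed second/third distance thresholds (m)
    """Keep obstacle contour points inside the camera's view angle and within
    the [d_min, d_max] distance band, discarding the rest."""
    half_fov = math.radians(view_angle_deg) / 2.0
    kept = []
    for x, y in points:
        dx, dy = x - cam_x, y - cam_y
        dist = math.hypot(dx, dy)
        if dist < d_min or dist > d_max:
            continue  # too close to or too far from the camera
        # angle between the camera's optical axis (cam_yaw) and the ray to the
        # point, wrapped to [-pi, pi)
        off = (math.atan2(dy, dx) - cam_yaw + math.pi) % (2.0 * math.pi) - math.pi
        if abs(off) > half_fov:
            continue  # outside the view angle threshold
        kept.append((x, y))
    return kept
```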
In the embodiment shown in fig. 1, the static obstacle grounded contour line multi-frame fusion method of the present application includes: s102, single frame point acquisition process.
In the single frame point acquisition process represented by S102, the filtered obstacle contour points corresponding to each frame of the environment image are converted into a polar coordinate system using the center of the vehicle as the origin of the polar coordinate system, and a point closest to the origin of the polar coordinate system is reserved as a single frame point among all the filtered obstacle contour points having the same angular coordinate.
In one example of the present embodiment, the center of the vehicle and the origin of the polar coordinate system may be the intersection of the diagonals of a quadrilateral formed by the four corners of the vehicle.
In an example of the present embodiment, the number of values taken by the angular coordinate in the polar coordinate system may be an integer multiple or an integer fraction of 360, for example 360, 720, or 180. The following description takes the case where the angular coordinate takes 360 values as an example.
When the angular coordinate of the polar coordinate system takes 360 values, the full 360-degree turn around the vehicle center, i.e., the origin of the polar coordinate system, is divided at a resolution of 1 degree into 360 rays starting from the origin. Denoting the origin by P0 and the angular coordinate corresponding to the 360 rays by θ, the angular coordinate θ of the polar coordinate system thus takes 360 values.
As can be seen from the above, each value of the angular coordinate θ corresponds to a ray starting at the origin of the polar coordinate system. Each ray is projected into the frame of environment image captured by the vehicle camera. If the ray is projected onto one or more pixel points, those pixels are regarded as the corresponding points projected onto the obstacle contour; among the projected pixels, the one with the smallest distance coordinate in the polar coordinate system, i.e., the obstacle contour point closest to the vehicle center, is taken as the single frame point and is regarded as the corresponding point on the grounding contour line of the obstacle facing the vehicle camera. In this case, the single frame point is recorded and retained. Optionally, recording the single frame point may mean recording its coordinates.
Specific embodiments in which the angular coordinate of the polar coordinate system takes other numbers of values can be obtained similarly.
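As a hedged sketch of the per-ray selection just described, assuming the filtered contour points are already expressed in the vehicle-centered frame and 360 one-degree bins are used:

```python
import math

def extract_single_frame_points(filtered_points, n_bins=360):
    """For each of n_bins angular bins (1-degree resolution when n_bins=360),
    keep the filtered obstacle contour point closest to the origin of the
    polar coordinate system (the vehicle center) as that bin's single frame point."""
    nearest = [None] * n_bins  # per-bin (distance, angle_deg); None if no point on the ray
    for x, y in filtered_points:
        r = math.hypot(x, y)
        theta = math.degrees(math.atan2(y, x)) % 360.0
        k = int(theta * n_bins / 360.0) % n_bins
        if nearest[k] is None or r < nearest[k][0]:
            nearest[k] = (r, theta)
    return nearest
```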
In an embodiment of the present application, the single frame point acquisition process represented by S102 may further include a distance smoothing process, that is, dividing the single frame points into a plurality of adjacent groups at equal angular intervals according to their angular coordinates, and performing a weighted average on the distance coordinates of the single frame points within each group using a Gaussian algorithm.
In the distance smoothing process of this embodiment, taking the above case where the angular coordinate of the polar coordinate system takes 360 values as an example, the single frame points may be divided into groups spanning 5 degrees of angular coordinate each.
Optionally, within each group, for a single frame point whose distance coordinate falls outside a predetermined distance range, the distance coordinate may be reset to zero. The predetermined distance range is related to the shooting distance range within which the vehicle camera's shooting accuracy is good; for example, it may be 0.5 to 6.5 meters from the coordinate center of the vehicle camera. When different vehicle cameras are used, the predetermined distance range can be adjusted accordingly and is not limited to this numerical range.
In addition, optionally, the distance coordinates of the single frame points can be weighted-averaged by the Gaussian algorithm using the weights {1/9, 2/9, 3/9, 2/9, 1/9}.
Weighted-averaging the distance coordinates of each group of single frame points with a Gaussian algorithm smooths the distance coordinates and fills gaps in the connecting lines formed by the groups of single frame points, thereby improving the smoothness of the static obstacle grounding contour line obtained by the technical solution of the present application.
In this embodiment, the single frame point acquisition process represented by S102 may further include a boundary gradient sharpening process, that is, using a Sobel operator to further weight the distance coordinates of the single frame points in each group processed by the distance smoothing process. This acts to sharpen the boundary gradient, so that the lines of the static obstacle grounding contour line obtained by the technical solution of the present application are clearer.
Optionally, following the specific example of the distance smoothing process above, the distance coordinates that have been weighted-averaged with the Gaussian weights {1/9, 2/9, 3/9, 2/9, 1/9} may be further sharpened using the Sobel operator weights {-1, -2, 0, 2, 1}.
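A minimal sketch of the two filtering steps is given below. It uses the weights and the 0.5-6.5 m range stated above, but it replaces the fixed 5-degree groups with a sliding 5-tap window and adds the Sobel response back onto the smoothed distances; both choices are assumptions, since the exact combination is not spelled out here:

```python
GAUSS_W = (1/9, 2/9, 3/9, 2/9, 1/9)  # Gaussian smoothing weights from the description
SOBEL_W = (-1, -2, 0, 2, 1)          # Sobel sharpening weights from the description

def conv5(values, weights):
    """Centered 5-tap weighted combination; the two edge values on each side
    are left unchanged for simplicity."""
    out = list(values)
    for i in range(2, len(values) - 2):
        out[i] = sum(w * values[i - 2 + j] for j, w in enumerate(weights))
    return out

def smooth_and_sharpen(distances, lo=0.5, hi=6.5):
    """Distance smoothing then boundary gradient sharpening over per-degree
    distance coordinates: out-of-range distances are reset to zero, a Gaussian
    weighted average is applied, then a Sobel-weighted term is added back."""
    d = [r if (r is not None and lo <= r <= hi) else 0.0 for r in distances]
    smoothed = conv5(d, GAUSS_W)
    return [s + g for s, g in zip(smoothed, conv5(smoothed, SOBEL_W))]
```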
In the embodiment shown in fig. 1, the static obstacle grounded contour line multi-frame fusion method of the present application further includes: and S103, single frame point clustering process.
In the single frame point clustering process represented by S103, after all the single frame points are sorted according to the magnitude order of the angular coordinate values, for each single frame point, another single frame point that is closest to the single frame point is taken, the angular coordinate difference and the relative distance between the two single frame points are calculated, and if the angular coordinate difference is smaller than the angular coordinate difference threshold and the relative distance is smaller than the relative distance threshold, the two single frame points are clustered to the same single frame point cluster.
In the single frame point clustering process, all single frame points are first sorted by the magnitude of their angular coordinate values, and then for each single frame point the nearest other single frame point is taken to calculate the angular coordinate difference and relative distance, rather than exhaustively comparing arbitrary pairs of single frame points in no particular order. This greatly reduces the computational complexity and increases the processing speed of the technical solution of the present application, so that it can be better applied to automatic driving with high real-time requirements.
In one embodiment of the present application, the relative distance may be the distance between the two corresponding single frame points, which may be calculated from the polar angular and distance coordinates of the two single frame points, or from their coordinates in the global Cartesian coordinate system.
In this embodiment, all pairs of single frame points whose angular coordinate difference and relative distance are below the respective thresholds are clustered into the same single frame point cluster, which ensures that the single frame points within one single frame point cluster belong to points on the grounding contour line of the same static obstacle.
In an embodiment of the present application, the angular coordinate difference threshold of two single frame points in the polar coordinate system with the vehicle center as the origin may be any value in the range of 3 to 6 degrees; optionally, it is 5 degrees. The relative distance threshold of two single frame points may be any value in the range of 0.3 to 0.7 meters; optionally, it is 0.5 meters.
In another example of this embodiment, the single frame point clustering process represented by S103 may further include a single frame point cluster removal process, that is, removing any single frame point cluster containing fewer single frame points than the single frame point number threshold.
Alternatively, the single frame point number threshold may be a natural number from 5 to 10.
Removing single frame point clusters containing fewer single frame points than the threshold prevents transiently appearing dynamic obstacle images, or images that are not obstacles at all, from being mistaken for static obstacles, which ensures the accuracy of obstacle recognition and, in turn, the accuracy of the static obstacle grounding contour line.
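The clustering and removal steps might be sketched as below; reading "the nearest other single frame point" as the angular neighbor after sorting is an interpretation, and the default thresholds are the optional values named above:

```python
import math

def cluster_single_frame_points(points, angle_thresh=5.0, dist_thresh=0.5,
                                min_cluster_size=5):
    """Sort single frame points by angular coordinate and walk them in order:
    a point joins the current cluster when both its angular difference and its
    chord distance to the previous point are under the thresholds. Clusters
    smaller than min_cluster_size are removed afterwards."""
    pts = sorted(points, key=lambda p: p[0])  # p = (theta_deg, r)
    clusters, current = [], []
    for t1, r1 in pts:
        if current:
            t0, r0 = current[-1]
            # chord distance between two polar points sharing the same origin
            rel = math.sqrt(r0 * r0 + r1 * r1
                            - 2.0 * r0 * r1 * math.cos(math.radians(t1 - t0)))
            if (t1 - t0) < angle_thresh and rel < dist_thresh:
                current.append((t1, r1))
                continue
            clusters.append(current)
        current = [(t1, r1)]
    if current:
        clusters.append(current)
    return [c for c in clusters if len(c) >= min_cluster_size]
```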
In the embodiment shown in fig. 1, the static obstacle grounded contour line multi-frame fusion method of the present application further includes: and S104, single frame point cluster matching.
In the single frame point cluster matching process represented by S104, a current frame point cluster belonging to the current frame environment image and a historical frame point cluster belonging to an adjacent historical frame environment image are traversed among the single frame point clusters, and if the distance between the current frame point cluster and the historical frame point cluster is smaller than the first distance threshold and the degree of coincidence of their mutual projections is greater than the coincidence threshold, the current frame point cluster and the historical frame point cluster are matched into the same point cluster class.
In an embodiment of the present application, the distance between the current frame point cluster and the historical frame point cluster may be the horizontal coordinate distance between the two clusters in a Cartesian coordinate system.
Optionally, the horizontal coordinate distance between the current frame point cluster and the historical frame point cluster in the Cartesian coordinate system may be the distance, in the horizontal coordinate plane, between a feature point representing the current frame point cluster and a feature point representing the historical frame point cluster.
Optionally, the feature point of the current frame point cluster may be a centroid point of all or part of the single frame points in the current frame point cluster, and the feature point of the historical frame point cluster may be a centroid point of all or part of the single frame points in the historical frame point cluster.
In particular, as shown in fig. 2, the feature point of the current frame point cluster may be the midpoint C1 of the connecting line between the two mutually farthest end points P1 and P2 among the projections of its single frame points on the horizontal coordinate plane, and the feature point of the historical frame point cluster may likewise be the midpoint C2 of the connecting line between the two mutually farthest end points Q1 and Q2 among the projections of its single frame points on the horizontal coordinate plane.
Correspondingly, the horizontal coordinate distance between the current frame point cluster and the historical frame point cluster in the Cartesian coordinate system is then the distance between C1 and C2.
In this embodiment, the first distance threshold may be any value in the range of 0.3 to 0.7 meters; optionally, it is 0.5 meters. Thus, when the distance between the current frame point cluster and the historical frame point cluster is smaller than the first distance threshold, it can be ensured that the two clusters belong to the grounding contour line of the same static obstacle.
In an embodiment of the present invention, the overlapping degree of the mutual projections of the current frame point cluster and the historical frame point cluster may be the overlapping degree between horizontal coordinate ranges covered by the current frame point cluster and the historical frame point cluster in a cartesian coordinate system.
In particular, as shown in fig. 2, the degree of overlap between the horizontal coordinate ranges covered by the current frame point cluster and the historical frame point cluster may be computed from the connecting line between the two mutually farthest end points P1 and P2 of the current frame point cluster's projections on the horizontal coordinate plane and the connecting line between the two mutually farthest end points Q1 and Q2 of the historical frame point cluster's projections, by projecting each segment onto the other in the horizontal coordinate plane of the Cartesian coordinate system. That is, with Q1' being the projection of Q1 onto the segment P1P2 and P2' being the projection of P2 onto the segment Q1Q2, the overlap ratios are the proportion |Q1'P2| / |P1P2| of the projected segment Q1'P2 in the length of the segment P1P2, and the proportion |Q1P2'| / |Q1Q2| of the segment Q1P2' in the length of the segment Q1Q2.
In this embodiment, the threshold value of the degree of coincidence may be any value in the range of 0.3 to 0.7, and optionally, the threshold value of the degree of coincidence is 0.5.
When the degree of coincidence of the mutual projections of the current frame point cluster and the historical frame point cluster is greater than the coincidence threshold, it can be ensured that the two clusters indeed belong to the grounding contour line of the same static obstacle.
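A simplified sketch of this matching test follows; it assumes each cluster is given as a list of at least two (x, y) projections on the horizontal coordinate plane, and it computes the mutual projection overlap generically by clamping scalar projections onto each segment, rather than reproducing the exact Q1'/P2' construction of fig. 2:

```python
import math
from itertools import combinations

def endpoints(cluster):
    """Two mutually farthest points among a cluster's projections on the
    horizontal coordinate plane (points given as (x, y) pairs)."""
    return max(combinations(cluster, 2), key=lambda pq: math.dist(pq[0], pq[1]))

def clusters_match(cur, hist, dist_thresh=0.5, overlap_thresh=0.5):
    """Match test: feature-midpoint distance under dist_thresh and mutual
    projection overlap of the endpoint segments above overlap_thresh."""
    p1, p2 = endpoints(cur)
    q1, q2 = endpoints(hist)
    c1 = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    c2 = ((q1[0] + q2[0]) / 2.0, (q1[1] + q2[1]) / 2.0)
    if math.dist(c1, c2) >= dist_thresh:
        return False

    def overlap(a, b, u, v):
        """Fraction of segment ab covered by the projection of segment uv onto it."""
        abx, aby = b[0] - a[0], b[1] - a[1]
        ab2 = abx * abx + aby * aby
        if ab2 == 0.0:
            return 0.0
        tu = max(0.0, min(1.0, ((u[0] - a[0]) * abx + (u[1] - a[1]) * aby) / ab2))
        tv = max(0.0, min(1.0, ((v[0] - a[0]) * abx + (v[1] - a[1]) * aby) / ab2))
        return abs(tv - tu)

    return (overlap(p1, p2, q1, q2) > overlap_thresh
            and overlap(q1, q2, p1, p2) > overlap_thresh)
```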
In the embodiment shown in fig. 1, the static obstacle grounded contour line multi-frame fusion method further includes: and S105, multi-frame fusion process.
In the multi-frame fusion process represented by S105, all single frame points matched to any one point cluster class in the single frame point cluster matching process are converted to the global Cartesian coordinate system, and the surrounding area is rasterized to obtain a temporary map; if the total number of occurrences of the corresponding single frame points of consecutive multi-frame environment images in a grid of the temporary map is greater than the single frame point occurrence threshold, the information of that grid is stored to the stable map.
Here, the stable map is taken as the final result of the multi-frame fusion process indicated by S105.
In an embodiment of the present application, the single frame point occurrence threshold may be any natural number from 3 to 6; optionally, it is 4.
Only when the total number of occurrences of the corresponding single frame points of consecutive multi-frame environment images in a grid of the temporary map is greater than the single frame point occurrence threshold is the information of that grid stored to the stable map. This prevents transiently appearing dynamic obstacle images, or images that are not obstacles, from being mistaken for static obstacles, ensuring the accuracy of obstacle recognition and, in turn, the accuracy of the static obstacle grounding contour line.
By storing only the statistical average position of the single frame points appearing in a grid, rather than the position information of every single frame point, the statistical average position can be updated in real time without retaining all single frame points, which reduces the calculation amount and avoids excessive storage requirements.
In an example of this embodiment, the information of the grid may include: the global Cartesian coordinate information of the grid, the statistical average position of the single frame points occurring in the grid, and the total number of occurrences of single frame points in the grid.
As an example of this embodiment, the statistical average position may be the average position of the geometric centers of the single frame points within the corresponding grid, as shown in fig. 3.
In fig. 3, when single frame points fall into a certain grid in N consecutive frames, the grid is regarded as an activated grid. For example, the solid points in the grids of fig. 3 represent single frame points, and the hollow points in the upper right and lower right grids represent the average positions of the geometric centers of the solid points in those grids, that is, the positions given by averaging the coordinates of all solid points in the grid.
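The gridding and promotion logic could be sketched as follows. A Python dict plays the role of the hash table discussed further below; keying it by the (i, j) grid index pair, rather than by the coordinate sum mentioned later, and the 0.1 m cell size are assumptions of this sketch:

```python
class GridFusion:
    """Temporary and stable maps as hash tables keyed by grid indices.
    The 0.1 m cell size is an assumption; the occurrence threshold of 4 is
    the optional value from the description."""

    def __init__(self, cell=0.1, count_thresh=4):
        self.cell = cell
        self.count_thresh = count_thresh
        self.temp = {}    # temporary map: (i, j) -> (count, mean_x, mean_y)
        self.stable = {}  # stable map, same record layout

    def grid_key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def add_point(self, x, y):
        """Bin one fused single frame point; keep a running geometric-center
        average so individual points never need to be stored."""
        key = self.grid_key(x, y)
        count, mx, my = self.temp.get(key, (0, 0.0, 0.0))
        count += 1
        mx += (x - mx) / count
        my += (y - my) / count
        self.temp[key] = (count, mx, my)
        if count > self.count_thresh:
            self.stable[key] = self.temp[key]  # promote to the stable map
```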
In an example of this embodiment, the multi-frame fusion process represented by S105 may further include a map updating process: for the current frame environment image, it is determined whether the proportion of all corresponding single frame points acquired in the current frame acquisition process that fall into grids already stored in the stable map is greater than the hit rate threshold; if so, all single frame points corresponding to the current frame image are added to the stable map; otherwise, only the information of the corresponding grids of the temporary map in which the corresponding single frame points appear is updated.
In this embodiment, "hit" refers to "hit" all grids already stored in the stable map, i.e., all the single frame points corresponding to the new environment image of each frame and the current frame environment image obtained in the current frame obtaining process shown in S102. Among all the single frame points of the current frame environment image, the "hit", that is, the point falling into any one of the grids stored in the stable map, occupies the proportion of all the single frame points of the current frame environment image, that is, the "hit rate". The hit rate threshold may be any value in the range of 65% to 100%, alternatively the hit rate threshold may be 70%,75%,80%,85%.
In this embodiment, when the hit rate is greater than the hit rate threshold, it is determined that the obstacle represented by the single frame point in the current frame is stably present in all the previous multi-frame environment images.
Otherwise, if the hit rate is not greater than the hit rate threshold, it is determined that the obstacle represented by the single frame points in the current frame has not been stably present in the preceding multi-frame environment images. Therefore, only the information of these single frame points in the corresponding grids of the temporary map is updated.
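Building on the GridFusion sketch above, the map updating process might look like this; the 0.75 hit rate threshold is one of the optional values, and promoting exactly the frame's grids is one interpretation of "adding all the single frame points corresponding to the current frame image to the stable map":

```python
def update_with_frame(fusion, frame_points, hit_rate_thresh=0.75):
    """Map updating sketch for one frame: compute the hit rate of the frame's
    single frame points against the stable map, always update the temporary
    map, and promote the frame's grids only when the hit rate clears the
    threshold."""
    keys = [fusion.grid_key(x, y) for x, y in frame_points]
    hit_rate = (sum(1 for k in keys if k in fusion.stable) / len(keys)) if keys else 0.0
    for x, y in frame_points:
        fusion.add_point(x, y)  # temporary-map statistics are updated either way
    if hit_rate > hit_rate_thresh:
        for k in set(keys):
            fusion.stable[k] = fusion.temp[k]  # add all of this frame's grids
```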
In an embodiment of the present application, a key value may be computed for each grid so that the temporary map and the stable map are represented by hash tables.
As an example, the Cartesian coordinate information of an activated grid may be used as the key of the hash table; for example, the key value may be the sum of the abscissa and ordinate values of the corresponding activated grid. In this way, one can query among the keys of the hash table whether the Cartesian coordinate information of a certain activated grid exists. If it does not exist, the related information of the activated grid has not previously been stored in the temporary map and/or the stable map, and the Cartesian coordinate information of the activated grid, the statistical average position associated with it, and the number of occurrences of single frame points associated with it are added to the hash table. If it is found, the related information of the activated grid has already been stored in the stable map, and only the corresponding statistical average position and the corresponding number of occurrences of single frame points are updated in the hash table.
In this embodiment, by using hash tables for the temporary map and the stable map, keyed by information such as the coordinate information of activated grids, the temporary map and the stable map can be queried quickly to decide whether the related information of a new multi-frame fusion point needs to be added or the related information of a historical multi-frame fusion point needs to be updated. A multi-frame fusion point is the point corresponding to the statistical average position obtained by the static obstacle grounding contour line multi-frame fusion scheme of the present application. In this way, the time complexity of the algorithm of the static obstacle grounding contour line multi-frame fusion scheme can be greatly reduced, approaching linear, so the calculation speed of the scheme is high and it is well suited to automatic driving with high real-time requirements.
Fig. 4 is a schematic diagram showing the constituent modules of an embodiment of the static obstacle grounded contour line multi-frame fusion device.
In this specific embodiment, the static obstacle ground contour line multi-frame fusion device includes: the preprocessing module 401 extracts the obstacle contour points for each frame of environment image shot by the cameras deployed around the vehicle, and filters and discards points outside a predetermined geometric range from the obstacle contour points to obtain filtered obstacle contour points; a single frame point obtaining module 402, configured to convert the filtered obstacle contour points corresponding to each frame of the environment image into a polar coordinate system using the center of the vehicle as the origin of the polar coordinate system, and keep the closest point to the origin of the polar coordinate system among all the filtered obstacle contour points having the same angular coordinate as a single frame point; the single frame point clustering module 403, after sorting all the single frame points according to the magnitude order of the angular coordinate values, for each single frame point, taking another single frame point that is closest to the single frame point, calculating an angular coordinate difference value and a relative distance between the two single frame points, and if the angular coordinate difference value is smaller than an angular coordinate difference threshold value and the relative distance is smaller than a relative distance threshold value, clustering the two single frame points to the same single frame point cluster; a single frame point cluster matching module 404, which traverses a current frame point cluster belonging to the current frame environment image and a historical frame point cluster belonging to an adjacent historical frame environment image in the single frame point cluster, and matches the current frame point cluster and the historical frame point cluster to the same point cluster class if the distance between the current frame point cluster and the historical frame point cluster is smaller than a first distance threshold and the coincidence degree of the mutual projection between the current frame point cluster and the historical frame point cluster is greater than a coincidence degree threshold; and a multi-frame fusion module 405, converting all single frame points matched to any one point cluster in the single frame point cluster matching process into a global Cartesian coordinate system, rasterizing the surrounding of the global Cartesian coordinate system to obtain a temporary map, and if the total occurrence frequency of the corresponding single frame points of the continuous multi-frame environment images in a grid of the temporary map is greater than the threshold value of the occurrence frequency of the single frame points, storing the information of the grid to a stable map.
In this specific embodiment, the preprocessing module 401, the single frame point acquisition module 402, the single frame point clustering module 403, the single frame point cluster matching module 404, and the multi-frame fusion module 405 may respectively execute the preprocessing process S101, the single frame point acquisition process S102, the single frame point clustering process S103, the single frame point cluster matching process S104, and the multi-frame fusion process S105 described in the above specific embodiments and examples of the static obstacle grounding contour line multi-frame fusion method shown in fig. 1, and can achieve the corresponding technical effects of those processes.
In a specific embodiment of the present application, a computer-readable storage medium is provided, which stores computer instructions that, when run, execute the static obstacle grounding contour line multi-frame fusion method described in any one of the above embodiments and examples. The computer instructions stored in the storage medium may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium that may be used to store computer instructions. In general, a storage medium may be under the control of a processor such that the processor can read information from, and write information to, the storage medium.
The Processor may be a Central Processing Unit (CPU), other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), other Programmable logic devices, discrete Gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one embodiment of the present application, there is provided a computer device comprising a processor and a memory, the memory storing computer instructions, wherein: the processor operates the computer instructions to perform the static obstacle grounded contour multi-frame fusion method described in any of the above embodiments, examples, etc.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the technical solution of the present application.
The above embodiments are merely examples and are not intended to limit the scope of the present disclosure; all equivalent structural changes made using the contents of the specification and the drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the protection scope of the present application.

Claims (13)

1. A static obstacle grounding contour line multi-frame fusion method is characterized by comprising the following steps:
a preprocessing process, namely extracting obstacle contour points aiming at each frame of environment image shot by cameras arranged around a vehicle, and filtering and discarding points out of a preset geometric range in the obstacle contour points to obtain filtered obstacle contour points;
a single frame point obtaining process, converting the filtered obstacle contour points corresponding to each frame of environment image into a polar coordinate system taking the center of the vehicle as the origin of the polar coordinate system, and reserving the points closest to the origin of the polar coordinate system as single frame points among all the filtered obstacle contour points with the same angular coordinate;
a single frame point clustering process, wherein after all the single frame points are sequenced according to the magnitude sequence of the angular coordinate values, for each single frame point, another single frame point which is the nearest to the single frame point is selected, the angular coordinate difference value and the relative distance between the two single frame points are calculated, and if the angular coordinate difference value is smaller than an angular coordinate difference threshold value and the relative distance is smaller than a relative distance threshold value, the two single frame points are clustered to the same single frame point cluster;
a single frame point cluster matching process, wherein a current frame point cluster belonging to a current frame environment image and a historical frame point cluster belonging to an adjacent historical frame environment image in the single frame point cluster are traversed, and if the distance between the current frame point cluster and the historical frame point cluster is smaller than a first distance threshold value and the coincidence degree of the mutual projection of the current frame point cluster and the historical frame point cluster is larger than a coincidence degree threshold value, the current frame point cluster and the historical frame point cluster are matched to the same point cluster class; and
and in the multi-frame fusion process, converting all the single frame points matched into any one point cluster in the single frame point cluster matching process into a global Cartesian coordinate system, rasterizing the periphery of the global Cartesian coordinate system to obtain a temporary map, and if the total occurrence frequency of the single frame points of the continuous multi-frame environment image which are already present in a grid of the temporary map is greater than the threshold value of the occurrence frequency of the single frame points, storing the information of the grid into a stable map.
2. The static obstacle grounded contour line multi-frame fusion method according to claim 1, wherein the information of the grid comprises: global cartesian coordinate information of the grid, a statistical average location of the single frame points occurring in the grid, and the total number of occurrences of the single frame points occurring in the grid.
3. The static obstacle grounded contour line multi-frame fusion method according to claim 2,
the statistical average position is the geometric center average position of the single frame point in the corresponding grid.
4. The static obstacle grounded contour line multiframe fusion method of claim 1, wherein the multiframe fusion process further comprises:
and a map updating process, namely judging whether the proportion of all the corresponding single frame points acquired in the current frame acquisition process falling into any grid stored in the stable map is greater than a hit rate threshold value or not, if so, adding all the single frame points corresponding to the current frame image into the stable map, otherwise, only updating the information of the corresponding grid of the temporary map in which the corresponding single frame points appear.
5. The static obstacle grounding contour line multi-frame fusion method according to claim 1, wherein:
the distance between the current frame point cluster and the historical frame point cluster is the horizontal coordinate distance between the two point clusters in a Cartesian coordinate system; and/or
the coincidence degree of the mutual projections of the current frame point cluster and the historical frame point cluster is the degree of overlap between the horizontal coordinate ranges covered by the two point clusters in the Cartesian coordinate system.
6. The static obstacle grounding contour line multi-frame fusion method according to claim 5, wherein:
the horizontal coordinate distance between the current frame point cluster and the historical frame point cluster in the Cartesian coordinate system is determined as follows: for each point cluster, the single frame points in the cluster are projected onto the horizontal coordinate plane, the two projected points farthest from each other are taken as endpoints, and the midpoint of the line connecting the two endpoints is taken; the horizontal coordinate distance is the distance between the two midpoints so obtained.
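For illustration only: a sketch of the cluster distance of claim 6 and the projection coincidence of claim 5, on clusters given as lists of (x, y) projections. Clusters are assumed to hold at least two points, and the overlap ratio shown is one plausible normalisation, since the claims do not fix one.

```python
import itertools
import math

def farthest_pair_midpoint(cluster_xy):
    """Midpoint of the segment joining the two mutually farthest projected
    points of a cluster on the horizontal plane (claim 6)."""
    a, b = max(itertools.combinations(cluster_xy, 2),
               key=lambda pair: (pair[0][0] - pair[1][0]) ** 2
                              + (pair[0][1] - pair[1][1]) ** 2)
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def cluster_distance(c1, c2):
    """Horizontal coordinate distance between two clusters' midpoints."""
    m1, m2 = farthest_pair_midpoint(c1), farthest_pair_midpoint(c2)
    return math.hypot(m1[0] - m2[0], m1[1] - m2[1])

def projection_coincidence(c1, c2, axis=0):
    """Overlap of the horizontal coordinate ranges covered by two clusters
    along one axis, normalised by the shorter range."""
    lo1, hi1 = min(p[axis] for p in c1), max(p[axis] for p in c1)
    lo2, hi2 = min(p[axis] for p in c2), max(p[axis] for p in c2)
    overlap = max(0.0, min(hi1, hi2) - max(lo1, lo2))
    shorter = max(min(hi1 - lo1, hi2 - lo2), 1e-9)
    return overlap / shorter

# claim 1's matching test then reads, with invented thresholds:
# matched = cluster_distance(c1, c2) < 1.0 and projection_coincidence(c1, c2) > 0.5
```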
7. The static obstacle grounding contour line multi-frame fusion method according to claim 1, wherein the single frame point clustering process further comprises:
a single frame point cluster removal process, wherein single frame point clusters containing fewer single frame points than a single frame point number threshold are removed.
8. The static obstacle grounding contour line multi-frame fusion method according to claim 1, wherein the single frame point acquisition process further comprises:
a distance smoothing process, wherein the single frame points are divided into a plurality of adjacent groups at equal angular intervals according to their angular coordinates, and the distance coordinates of the single frame points in each group are weighted-averaged using a Gaussian algorithm.
9. The static obstacle grounding contour line multi-frame fusion method according to claim 8, wherein the single frame point acquisition process further comprises:
a boundary gradient sharpening process, wherein a further weighted average is applied, using a Sobel operator, to the distance coordinates of the single frame points in each group processed by the distance smoothing process.
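For illustration only: a sketch of claims 8 and 9 on the range values of one angular group. The kernel taps and bin count are invented, and since claim 9 does not spell out how the Sobel-weighted result is combined with the smoothed profile, the final line shows just one plausible combination.

```python
import numpy as np

GAUSS = np.array([1.0, 2.0, 1.0]) / 4.0   # 3-tap Gaussian weights (illustrative)
SOBEL = np.array([-1.0, 0.0, 1.0])        # 1-D Sobel derivative kernel

def group_by_angle(points, n_groups=360):
    """Split (angle_rad, range_m) single frame points into equal angular bins."""
    groups = [[] for _ in range(n_groups)]
    for ang, rng in points:
        idx = int((ang % (2.0 * np.pi)) / (2.0 * np.pi) * n_groups) % n_groups
        groups[idx].append(rng)
    return groups

def smooth_and_sharpen(ranges, edge_gain=0.5):
    """Gaussian-smooth the distance coordinates of one group, then use a
    Sobel pass to re-emphasise boundary gradients flattened by smoothing."""
    r = np.asarray(ranges, dtype=float)
    smoothed = np.convolve(r, GAUSS, mode="same")
    gradient = np.convolve(smoothed, SOBEL, mode="same")
    return smoothed + edge_gain * np.abs(gradient)
```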
10. The static obstacle grounding contour line multi-frame fusion method according to claim 1, wherein filtering out, during the preprocessing, points outside the predetermined geometric range from among the obstacle contour points comprises:
removing, from the obstacle contour points, points exceeding a view angle threshold of the camera, and points whose distance from the camera is smaller than a second distance threshold or larger than a third distance threshold, wherein the second distance threshold is smaller than the third distance threshold.
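For illustration only: one reading of claim 10's geometric filter, applied to contour points expressed in a camera frame whose x axis is the optical axis. Every threshold value below is an invented placeholder.

```python
import math

def filter_contour_points(points, fov_deg=95.0, d_min=0.3, d_max=12.0):
    """Keep only contour points inside the camera's view-angle threshold and
    between the second (d_min) and third (d_max) distance thresholds."""
    half_fov = math.radians(fov_deg) / 2.0
    kept = []
    for x, y in points:
        dist = math.hypot(x, y)
        bearing = abs(math.atan2(y, x))   # angle off the optical axis
        if bearing <= half_fov and d_min <= dist <= d_max:
            kept.append((x, y))
    return kept
```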
11. A static obstacle grounding contour line multi-frame fusion device, characterized by comprising:
a preprocessing module, configured to extract obstacle contour points from each frame of environment image captured by cameras arranged around a vehicle, and to filter out points outside a predetermined geometric range among the obstacle contour points, obtaining filtered obstacle contour points;
a single frame point acquisition module, configured to convert the filtered obstacle contour points corresponding to each frame of environment image into a polar coordinate system with the center of the vehicle as the origin, and to retain, among all the filtered obstacle contour points having the same angular coordinate, the point closest to the origin of the polar coordinate system as a single frame point;
a single frame point clustering module, configured to sort all the single frame points in order of angular coordinate value, select for each single frame point the nearest other single frame point, calculate the angular coordinate difference and the relative distance between the two single frame points, and cluster the two single frame points into the same single frame point cluster if the angular coordinate difference is smaller than an angular coordinate difference threshold and the relative distance is smaller than a relative distance threshold;
a single frame point cluster matching module, configured to traverse the current frame point clusters belonging to the current frame environment image and the historical frame point clusters belonging to an adjacent historical frame environment image, and to match a current frame point cluster and a historical frame point cluster into the same point cluster class if the distance between the two point clusters is smaller than a first distance threshold and the coincidence degree of their mutual projections is greater than a coincidence degree threshold; and
a multi-frame fusion module, configured to convert all the single frame points matched into any point cluster class by the single frame point cluster matching module into a global Cartesian coordinate system, to rasterize the surroundings of the vehicle in the global Cartesian coordinate system to obtain a temporary map, and to store the information of a grid of the temporary map into a stable map if the total number of occurrences of single frame points from consecutive multi-frame environment images in that grid is greater than a single frame point occurrence threshold.
12. A computer-readable storage medium having stored thereon computer instructions which, when executed, perform the static obstacle grounding contour line multi-frame fusion method of any one of claims 1-10.
13. A computer device, comprising:
a memory storing computer instructions; and
a processor that executes the computer instructions to perform the static obstacle grounding contour line multi-frame fusion method of any one of claims 1-10.
CN202110622705.7A 2021-06-04 2021-06-04 Static obstacle grounding contour line multi-frame fusion method, device and medium Pending CN115512316A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110622705.7A CN115512316A (en) 2021-06-04 2021-06-04 Static obstacle grounding contour line multi-frame fusion method, device and medium
PCT/CN2021/109538 WO2022252380A1 (en) 2021-06-04 2021-07-30 Multi-frame fusion method and apparatus for grounding contour line of stationary obstacle, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110622705.7A CN115512316A (en) 2021-06-04 2021-06-04 Static obstacle grounding contour line multi-frame fusion method, device and medium

Publications (1)

Publication Number Publication Date
CN115512316A true CN115512316A (en) 2022-12-23

Family

ID=84322748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110622705.7A Pending CN115512316A (en) 2021-06-04 2021-06-04 Static obstacle grounding contour line multi-frame fusion method, device and medium

Country Status (2)

Country Link
CN (1) CN115512316A (en)
WO (1) WO2022252380A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953764A (en) * 2023-03-13 2023-04-11 深圳魔视智能科技有限公司 Vehicle sentinel method, device, equipment and storage medium based on aerial view
CN117607897A (en) * 2023-11-13 2024-02-27 深圳市其域创新科技有限公司 Dynamic object removing method and related device based on light projection method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563817B (en) * 2023-04-14 2024-02-20 禾多科技(北京)有限公司 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN116147567B (en) * 2023-04-20 2023-07-21 高唐县空间勘察规划有限公司 Homeland mapping method based on multi-metadata fusion
CN117315530B (en) * 2023-09-19 2024-07-12 天津大学 Instance matching method based on multi-frame information
CN117876412B (en) * 2024-03-12 2024-05-24 江西求是高等研究院 Three-dimensional reconstruction background separation method, system, readable storage medium and computer

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948635B (en) * 2017-12-21 2021-04-27 北京万集科技股份有限公司 Target identification method and device based on laser scanning
US11391844B2 (en) * 2018-12-19 2022-07-19 Fca Us Llc Detection and tracking of road-side pole-shaped static objects from LIDAR point cloud data
CN110889350B (en) * 2019-11-18 2023-05-23 四川西南交大铁路发展股份有限公司 Line obstacle monitoring and alarming system and method based on three-dimensional imaging
CN112417967B (en) * 2020-10-22 2021-12-14 腾讯科技(深圳)有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium

Also Published As

Publication number Publication date
WO2022252380A1 (en) 2022-12-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination