CN114494075A - Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium - Google Patents

Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium

Info

Publication number
CN114494075A
CN114494075A (application CN202210131180.1A)
Authority
CN
China
Prior art keywords
point cloud
obstacle
image
dimensional point
scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210131180.1A
Other languages
Chinese (zh)
Inventor
禹文扬
谢意
冯冲
蒋先尧
刘志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lukaizhixing Technology Co ltd
Original Assignee
Beijing Lukaizhixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lukaizhixing Technology Co ltd filed Critical Beijing Lukaizhixing Technology Co ltd
Priority to CN202210131180.1A priority Critical patent/CN114494075A/en
Publication of CN114494075A publication Critical patent/CN114494075A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/70
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Abstract

The invention provides an obstacle identification method based on a three-dimensional point cloud, an electronic device, and a storage medium. The method comprises: acquiring a three-dimensional point cloud image of a field environment; generating initial confidence values for the imaging points of the three-dimensional point cloud image; distinguishing obstacle image regions from pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points and the size of each imaging target; and adjusting the confidence values of imaging points in at least one of the obstacle image region and the pseudo-obstacle image region and removing the point cloud of pseudo-obstacles from the three-dimensional point cloud image according to the adjusted confidence values.

Description

Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium
Technical Field
The invention relates to obstacle identification based on three-dimensional point clouds. More particularly, the present invention provides a three-dimensional point cloud-based obstacle recognition method, an electronic device, a storage medium, and a computer program product.
Background
Lidar is unaffected by illumination and directly provides accurate three-dimensional information, so it is often used to compensate for the shortcomings of camera sensors. Lidar is widely applied as a sensing device in fields such as automobiles and robotics: it actively emits infrared light toward a target and, after receiving the light reflected back from the target, calculates three-dimensional information such as the direction and shape of the target by the time-of-flight (TOF) or laser triangulation ranging principle. In outdoor application scenes, changing weather conditions produce natural factors that affect vision sensors, such as raised dust, smoke, and rain. According to the imaging principle of lidar, raindrops, dust particles, and the like on the path of the emitted beam reflect it directly back and cause false detections. The three-dimensional data acquired by the lidar is commonly referred to as a point cloud. Because three-dimensional point cloud data contains only spatial position coordinates and no conventional visual information such as color or brightness, pseudo-obstacle targets such as smoke, rain, and dense fog seen by three-dimensional sensors such as lidar lack additional reference information for judging whether they are real obstacle targets. In the prior art, the influence of these natural factors on obstacle identification is often reduced by measures such as enlarging the beam area or increasing the wavelength of the infrared light used.
Disclosure of Invention
An embodiment of the invention provides an obstacle identification method based on a three-dimensional point cloud, comprising the following steps:
acquiring a three-dimensional point cloud image of a field environment;
generating initial confidence values for the imaging points of the three-dimensional point cloud image;
distinguishing obstacle image regions from pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target; and
adjusting the confidence values of imaging points in at least one of the obstacle image region and the pseudo-obstacle image region and removing the point cloud of pseudo-obstacles from the three-dimensional point cloud image according to the adjusted confidence values, as shown in the sketch after this list.
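As a reading aid, the following is a minimal sketch, in Python, of how the four claimed steps could fit together. It is not the patent's reference implementation: the N × M × 3 array layout, the additive confidence boost CONF_BONUS, and the cut-off CONF_TH are illustrative assumptions, and the obstacle mask of the third step is assumed to be computed by the row/column scan described later.

```python
import numpy as np

CONF_BONUS = 1.0   # assumed magnitude of the confidence boost for obstacle points
CONF_TH = 1.2      # assumed confidence threshold below which points are removed

def identify_obstacles(cloud_img: np.ndarray, intensity: np.ndarray,
                       obstacle_mask: np.ndarray) -> np.ndarray:
    """cloud_img: N x M x 3 point image (step 1); intensity: N x M reflected-light
    intensities; obstacle_mask: N x M bool mask from the scan of step 3."""
    conf = intensity / max(float(intensity.max()), 1e-9)  # step 2: initial confidence
    conf = conf + CONF_BONUS * obstacle_mask              # step 4: adjust confidence
    keep = conf >= CONF_TH                                # step 4: keep confident points
    return cloud_img[keep]                                # pseudo-obstacle points removed
```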
In some embodiments, distinguishing the obstacle image regions and pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target comprises:
calculating the spatial distance between every two adjacent imaging points in the three-dimensional point cloud image; and
comparing the spatial distance with a distance threshold, and classifying the corresponding imaging point into the pseudo-obstacle image region if the spatial distance is larger than the distance threshold.
In some embodiments, distinguishing the obstacle image regions and pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target further comprises:
calculating the size of the imaging target if the spatial distance is less than the distance threshold; and
classifying the imaging target into the obstacle image region if the size of the imaging target is larger than a size threshold.
In some embodiments, distinguishing the obstacle image regions and pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points and the size of each imaging target comprises:
scanning the three-dimensional point cloud image row by row and/or column by column to calculate the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target.
In some embodiments, scanning the three-dimensional point cloud image row by row and/or column by column to calculate the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target comprises:
(a) selecting a row or column to be scanned in the three-dimensional point cloud image, setting a counter to zero and setting i to 0;
(b) calculating the center distance of the i-th scanning point of the row or column in the three-dimensional point cloud image;
(c) calculating the center distance of the (i+1)-th scanning point if the center distance of the i-th scanning point is not zero; if the center distance of the i-th scanning point is zero, setting the counter to zero, incrementing i by 1 and returning to step (b);
(d) calculating the spatial distance between the (i+1)-th scanning point and the i-th scanning point if the center distance of the (i+1)-th scanning point is not zero; if the center distance of the (i+1)-th scanning point is zero, setting the counter to zero, incrementing i by 2 and returning to step (b); and
(e) if the spatial distance between the (i+1)-th scanning point and the i-th scanning point is larger than the distance threshold, setting the counter to zero, incrementing i by 1 and returning to step (b); otherwise incrementing the counter by 1, incrementing i by 1 and returning to step (b),
wherein, each time the counter is about to be set to zero, its value is first compared with a size threshold, and if the value of the counter is greater than the size threshold, the scanning points from the (i-C)-th to the i-th are determined to belong to the obstacle image region, where C is the value of the counter.
In some embodiments, adjusting the confidence values of imaging points in at least one of the obstacle image region and the pseudo-obstacle image region and identifying an obstacle from the three-dimensional point cloud image according to the adjusted confidence values comprises:
(g) increasing the confidence values of the (i-C)-th to i-th scanning points if the (i-C)-th to i-th scanning points belong to the obstacle image region.
In some embodiments, adjusting the confidence values of imaging points in at least one of the obstacle image region and the pseudo-obstacle image region and removing the point cloud of pseudo-obstacles from the three-dimensional point cloud image according to the adjusted confidence values further comprises:
comparing the confidence value of each imaging point in the three-dimensional point cloud image with a confidence threshold, and regarding imaging points whose confidence value is lower than the confidence threshold as pseudo-obstacles, so as to remove them from the three-dimensional point cloud image.
In some embodiments, the method is for identifying obstacles on an outdoor non-hardened roadway, the method further comprising:
removing the ground point cloud in the three-dimensional point cloud image before generating an initial value of confidence of an imaging point of the three-dimensional point cloud image.
An embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any of the embodiments described above.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of the above embodiments.
Embodiments of the present invention also provide a computer program product comprising a computer program that, when executed by a processor, implements the method of any of the above embodiments.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the invention. Wherein:
FIG. 1 shows a schematic diagram of a work scenario in which a lidar generates a three-dimensional point cloud to identify an obstacle;
FIG. 2A is a schematic flow diagram of a three-dimensional point cloud based obstacle identification method according to one embodiment of the present invention;
fig. 2B is a schematic flowchart of a specific example of step S30 in the three-dimensional point cloud-based obstacle identification method shown in fig. 2A;
fig. 3 is a schematic flow diagram of line-by-line and/or column-by-column scanning of the three-dimensional point cloud image in a three-dimensional point cloud based obstacle identification method according to another embodiment of the present invention;
FIG. 4A is a schematic diagram of an original three-dimensional point cloud image generated by a lidar;
FIG. 4B is a schematic diagram of a three-dimensional point cloud image obtained after processing by a method according to an embodiment of the invention;
FIG. 5 is a block diagram of an electronic device for implementing a three-dimensional point cloud based obstacle identification method according to one embodiment of the present invention;
FIG. 6A is a schematic of a pixel arrangement of a three-dimensional point cloud image in two dimensions; and
fig. 6B is a top view of the three-dimensional point cloud space scanned by the lidar.
Detailed Description
To more clearly illustrate the objects, technical solutions and advantages of the present invention, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the following description of the embodiments is intended to illustrate and explain the present general inventive concept and should not be taken as limiting the present invention. In the specification and drawings, the same or similar reference numerals refer to the same or similar parts or components. The figures are not necessarily to scale and certain well-known components and structures may be omitted from the figures for clarity.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of "first," "second," and similar terms in the present application do not denote any order, quantity, or importance, but rather the terms are used to distinguish one element from another. The word "a" or "an" does not exclude a plurality. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", "top" or "bottom", etc. are used merely to indicate relative positional relationships, which may change when the absolute position of the object being described changes.
A three-dimensional point cloud image (which may be generated by a laser radar or a depth sensor, for example) may be used to represent the positional relationship of various objects or elements in space, which may in turn assist a user in identifying targets in the surrounding environment. For example, a lidar may be disposed on a motor vehicle to scan the vehicle's surroundings to acquire a three-dimensional point cloud image from which various targets in the vehicle's surroundings (e.g., road environment) are identified. An example is shown in fig. 1. In fig. 1, a laser radar 10 is disposed on a vehicle 11 and emits a laser beam 12 to image a field environment within a certain viewing angle (e.g., front or side of the vehicle, rear, etc.) so as to identify a target object 20 in the field environment.
As used herein, "field environment" may generally refer to various desired operating environments such as roads, mines, fields, and the like. In a real scene, the objects 20 in the surroundings of the vehicle may be relatively complex. For example, it may include some physical obstacles such as rocks, cars, engineering facilities, and the like that the vehicle needs to avoid, and may also include false obstacles that raise sand, rain mist, smoke, and the like, which do not actually hinder the vehicle from traveling. Therefore, it is desirable to quickly and accurately identify the obstacle based on the three-dimensional point cloud image, thereby ensuring driving safety. Here, the "pseudo obstacle" refers to an object that does not actually constitute a substantial obstacle (e.g., has a collision risk) to the travel of a person, a vehicle, or the like in the three-dimensional point cloud image. It is also an object of the present invention to provide a method capable of removing the influence of a false obstacle such as raised sand, rain fog, smoke, and the like from a three-dimensional point cloud image to correctly identify the obstacle.
Rain, fog, and smoke are visually similar in the three-dimensional point cloud imaging of a lidar; raised dust (sand and dust) on outdoor non-hardened roads is taken as the example in the following explanation.
Raised dust has the physical characteristic of relatively large spacing between dust particles, so after being irradiated by a light beam it presents the semi-transparent visual characteristic of partial reflection and partial penetration. The reflected light from the surface of raised dust captured by the radar therefore images as a point cloud of sparse density, which is clearly different from the three-dimensional point cloud imaged on the surface of a conventional obstacle. The basic principle of the method is to distinguish solid-obstacle point clouds from raised-dust pseudo-obstacle point clouds according to this characteristic of the pseudo-obstacle point cloud represented by raised dust.
Fig. 4A shows one example of an original three-dimensional point cloud image generated by a lidar. The three-dimensional point cloud image includes a flying dust point cloud 41, an obstacle vehicle body point cloud 42, and a ground point cloud 43. In this example, it can be clearly seen that the spatial separation between adjacent imaged points in the cloud of airborne dust is large and the distribution is relatively diffuse.
The invention provides an obstacle identification method based on a three-dimensional point cloud. In some embodiments, as illustrated in fig. 2A, the method for identifying an obstacle based on a three-dimensional point cloud includes:
step S10: acquiring a three-dimensional point cloud image of a field environment;
step S20: generating initial confidence values for the imaging points of the three-dimensional point cloud image;
step S30: distinguishing obstacle image regions from pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target; and
step S40: adjusting the confidence values of imaging points in at least one of the obstacle image region and the pseudo-obstacle image region and removing the point cloud of pseudo-obstacles from the three-dimensional point cloud image according to the adjusted confidence values.
In the embodiment of the invention, the three-dimensional point cloud image is a two-dimensional image, but each pixel in it corresponds to a coordinate point in three-dimensional space. Whereas each pixel of an ordinary camera image records the RGB color and gray-scale brightness of the point target, each pixel of the three-dimensional point cloud image records the three-dimensional space coordinates of the point target. In other words, each pixel of the three-dimensional point cloud image in two-dimensional form has a correspondence with a certain point in three-dimensional space. For example, fig. 6A shows one pixel PA in a certain row of pixels of a three-dimensional point cloud image in two-dimensional form; this pixel PA corresponds to a point SA in three-dimensional space (see fig. 6B), i.e., the system maps the spatial coordinates $(x_{SA}, y_{SA}, z_{SA})$ of SA to the pixel PA. Likewise, the pixel PB in the two-dimensional image of fig. 6A has a mapping relationship with the point SB in the three-dimensional space of fig. 6B. The spatial distance between imaging points (or scanning points) described herein refers to the distance between the corresponding points in three-dimensional space, for example the distance between the points SA and SB in fig. 6B. In actual lidar scanning, a multi-line lidar is often employed. Such a radar comprises multiple pairs of transmitters and receivers, each transmitter emitting one laser beam called a "line". A row of pixels in the three-dimensional point cloud image represents the image scanned by one laser beam ("line"); different lines typically correspond to different ranges of pitch angle, and the number of pixel rows generally corresponds to the number of lines of the lidar. By way of example, the lidar may be a 360-degree ring-scan radar or a fixed-direction radar. To better show the spatial relationship between the radar center and the point cloud, fig. 6B takes the form of a top view, in which the x-axis and y-axis lie in the plane of the paper and the z-axis is perpendicular to it. For convenience of representation, the radar center O may be taken as the origin of spatial coordinates. Considering only a single scan line and assuming that the pitch-angle range it occupies in the vertical (z) direction is small, the distance from the spatial point SA to the radar center O (the "center distance") may be approximated as

$$d_{SA} = \sqrt{x_{SA}^2 + y_{SA}^2}.$$

The "spatial distance between adjacent imaging points in the three-dimensional point cloud image" is therefore not the distance between adjacent pixels in the two-dimensional image, but the spatial distance between the corresponding points in three-dimensional space.
It should be noted that, in the above example, for the purpose of simplifying the algorithm, it is assumed that the point SA and the radar center O lie in the same plane. Embodiments of the invention are not limited in this regard; for example, the points SA and SB need not lie in the same plane as the radar center O.
The "confidence" referred to herein is a parameter for measuring the reliability of an imaging point as a desired imaging point. For example, in the above method, the purpose is achieved to find a real obstacle point cloud from a three-dimensional point cloud image, while excluding a point cloud of a pseudo obstacle. Then, the confidence can be used to measure the confidence that the imaging point corresponding to the confidence belongs to the real obstacle point cloud. For example, the confidence may be defined as: the higher the numerical value of the confidence coefficient is, the higher the confidence degree that the imaging point corresponding to the numerical value belongs to the real obstacle point cloud is. Therefore, when calculating and judging the obstacle point cloud in the three-dimensional point cloud image, the confidence coefficient can be used as an index, and when the confidence coefficient of a certain imaging point is greater than a certain confidence coefficient threshold value, the imaging point can be considered to belong to the real obstacle point cloud. In the embodiment of the invention, the value range and the value type of the confidence coefficient are not limited as long as the confidence coefficient that the imaging point belongs to the real obstacle point cloud can be represented. The imaging point of the three-dimensional point cloud image is different from the pixel point of the three-dimensional point cloud image. Taking fig. 4A as an example, it can be seen that not all the pixels in the three-dimensional point cloud image shown in fig. 4A form a point cloud (the point cloud is represented by a black dot or a line in fig. 4A). Therefore, herein, the imaged point of the three-dimensional point cloud image refers to an imaged point in the point cloud of the three-dimensional point cloud image. For the pixel points without point cloud formation, no further consideration is needed in the process of eliminating the false obstacle point cloud, and the confidence coefficient of the pixel points without point cloud formation is also not needed to be considered.
In step S10, the three-dimensional point cloud image may be generated, for example, by a radar or a depth sensor (e.g., mounted on a vehicle). In step S20, the initial confidence value of each imaging point may be regarded as an estimate, provided for example by the lidar, of the reliability of that imaging point. As an example, the initial confidence value may be determined from the intensity of the light reflected by the target when the imaging point is formed. For instance, the initial value may be defined to be positively correlated (e.g., proportional) to the reflected light intensity; in that case, an imaging point with a very low initial confidence may be removed directly as an interference signal, and a pixel that forms no point cloud in the initial three-dimensional point cloud image may be regarded as having a very low initial confidence so that it is disregarded when judging whether it belongs to a real obstacle point cloud. Accordingly, in the subsequent processing, the confidence value is positively correlated with the degree of belief that the imaging point belongs to a real obstacle point cloud.
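A possible reading of this intensity-based initialization is sketched below; the normalization and the noise_floor cut-off for interference signals are assumptions, since the text only requires positive correlation with reflected-light intensity.

```python
import numpy as np

def init_confidence(intensity: np.ndarray, noise_floor: float = 0.02) -> np.ndarray:
    """intensity: N x M reflected-light intensities (0 where no return was received)."""
    conf = intensity / max(float(intensity.max()), 1e-9)  # positively correlated with intensity
    conf[intensity <= noise_floor] = 0.0                  # very weak returns: interference
    return conf
```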
As mentioned previously, pseudo-obstacles such as airborne dust and smoke produce very discrete imaging point clouds, because such pseudo-obstacles are loose agglomerations of particles. When a radar scanning beam strikes such a pseudo-obstacle, part of the beam passes through it and another part is reflected back. The three-dimensional point cloud of such a pseudo-obstacle therefore shows large and irregular spacing between adjacent points. The reason is that the surface concentration of raised dust, smoke, and the like is never completely uniform: where the particle density is high, the laser reflection is strong and the imaged point lies near the surface; where the particle density is sparse, the reflection is weak and the imaged point lies farther from the surface. The irregular surface shape of such pseudo-obstacles further increases the discreteness and irregularity of the imaged point cloud. In contrast, the surface of a real obstacle (such as a vehicle or a rock) is dense, so its point cloud imaging is continuous with small point spacing, even if the imaging fluctuates on rough, hollow, or concave surfaces. This spacing refers to the spatial distance between adjacent imaging points in the point cloud image, not the spacing of dust particles; the large, irregular spacing of a pseudo-obstacle's imaging points is caused by its discrete, unstable distribution. In some embodiments, the lidar need not image every dust particle of the raised dust or smoke; it merely images the particle agglomeration.
As can be seen from the above analysis, the point cloud of a pseudo-obstacle is dispersed and has large spatial spacing, while the point cloud of a real obstacle is continuous and has small spatial spacing. Thus, in step S30, the obstacle image regions and pseudo-obstacle image regions in the three-dimensional point cloud image may be distinguished by calculating the spatial distance between adjacent imaging points and the size of each imaging target. For example, a point cloud with small spacing between adjacent imaging points and a large target size is classified into an obstacle image region, while other point clouds are classified into pseudo-obstacle image regions.
In order to remove the point cloud of pseudo-obstacles from the three-dimensional point cloud image, the confidence values of the obstacle image region and the pseudo-obstacle image region may be adjusted differently. For example, where the initial confidence is defined to be positively correlated with reflected light intensity, the confidence values of imaging points in the obstacle image region may be increased, or the confidence values of imaging points in the pseudo-obstacle image region may be decreased, and imaging points whose confidence falls below a certain threshold are finally removed from the three-dimensional point cloud image to remove the pseudo-obstacles. In step S40, the confidence values may be adjusted in the obstacle image region, in the pseudo-obstacle image region, or differently in both, so as to remove the point cloud of pseudo-obstacles from the three-dimensional point cloud image and correctly identify the real obstacles.
In some embodiments, as shown in fig. 2B, the step S30 may include:
step S31: calculating the spatial distance between every two adjacent imaging points in the three-dimensional point cloud image; and
step S32: comparing the spatial distance with a distance threshold, and classifying the corresponding imaging point into the pseudo-obstacle image region if the spatial distance is larger than the distance threshold.
The distance threshold, which may be denoted Dt, measures how stably the target point cloud is located. It may be determined according to the pseudo-obstacle conditions of the specific field environment and is not limited herein.
As an example, step S30 may further comprise:
step S33: calculating the size of the imaging target if the spatial distance is less than the distance threshold; and
step S34: classifying the imaging target into the obstacle image region if the size of the imaging target is larger than a size threshold.
Here, the size of the imaging target may be measured, for example, by the number of consecutive imaging points contained in the point cloud of the target. For example, after finding that the spatial distance between two adjacent imaging points is smaller than the distance threshold, it may be examined whether the spatial distance to the next imaging point also satisfies this condition. If there is a run of consecutive imaging points in which the spatial distance between all adjacent imaging points is less than the distance threshold, the number of imaging points in the run can be taken as the size of the imaging target. The size threshold may be set according to actual conditions (e.g., imaging distance, scene type in the environment), and embodiments of the present invention are not limited in this regard.
As described above, the three-dimensional point cloud image described in the present invention is a two-dimensional image. The two-dimensional image may be formed by an N × M pixel lattice, i.e., it comprises N rows (e.g., N = 64) and M columns (e.g., M = 1024) of pixels, which may correspond, for example, to a lidar with N lines and a horizontal resolution of M. Each pixel of the two-dimensional image corresponds to the three-dimensional coordinates of one imaging point. That is to say, the three-dimensional point cloud data is stored in a two-dimensional image structure, and the spatial adjacency of the two-dimensional pixels is strongly correlated with the spatial adjacency of the three-dimensional point cloud.
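The organized storage just described can be sketched as follows; the shapes and the convention that an all-zero pixel means "no return" are assumptions consistent with this description.

```python
import numpy as np

N_LINES, H_RES = 64, 1024                  # e.g. a 64-line lidar with 1024 columns
cloud_img = np.zeros((N_LINES, H_RES, 3))  # cloud_img[r, c] = (x, y, z) of one imaging point

def center_distance(point: np.ndarray) -> float:
    """Planar distance from the radar center O at the origin: sqrt(x^2 + y^2).
    A value of 0 marks a pixel with no return."""
    return float(np.hypot(point[0], point[1]))
```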
In some embodiments, the spatial distance between adjacent imaging points and the size of each imaging target in the three-dimensional point cloud image may be calculated by scanning the three-dimensional point cloud image row by row and/or column by column. That is, step S30 may include:
step S35: scanning the three-dimensional point cloud image row by row and/or column by column to calculate the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target.
Fig. 3 shows an example of the scan calculation performed for a certain row or a certain column in step S35. In this example, step S35 includes:
step S35a: selecting a row or column to be scanned in the three-dimensional point cloud image, setting a counter to zero and setting i to 0;
step S35b: calculating the center distance of the i-th scanning point of the row or column in the three-dimensional point cloud image;
step S35c: calculating the center distance of the (i+1)-th scanning point if the center distance of the i-th scanning point is not zero; if the center distance of the i-th scanning point is zero, setting the counter to zero, incrementing i by 1 and returning to step S35b;
step S35d: calculating the spatial distance between the (i+1)-th scanning point and the i-th scanning point if the center distance of the (i+1)-th scanning point is not zero; if the center distance of the (i+1)-th scanning point is zero, setting the counter to zero, incrementing i by 2 and returning to step S35b; and
step S35e: if the spatial distance between the (i+1)-th scanning point and the i-th scanning point is larger than the distance threshold, setting the counter to zero, incrementing i by 1 and returning to step S35b; otherwise incrementing the counter by 1, incrementing i by 1 and returning to step S35b;
wherein, each time the counter is about to be set to zero, its value is first compared with a size threshold (see the position indicated by "x" in fig. 3), and if the value of the counter is greater than the size threshold, the scanning points from the (i-C)-th to the i-th are determined to belong to the obstacle image region, where C is the value of the counter.
In step S35b, the center distance of the i-th scanning point is the distance from that scanning point to the radar center. For convenience of calculation, the scanning points of one row or column of the three-dimensional point cloud image may, after being mapped into three-dimensional space, be regarded as lying approximately in the same plane. Taking the radar center as the coordinate origin, the center distance of the i-th scanning point may then be defined as

$$d_i = \sqrt{x_i^2 + y_i^2},$$

where $x_i$ and $y_i$ are the x- and y-coordinates of the i-th scanning point. When the center distance of the i-th scanning point is not 0, the center distance of the (i+1)-th scanning point is calculated; when the center distance of the i-th or (i+1)-th scanning point is 0, that point is skipped by simply incrementing i so as to process the subsequent scanning points. When neither the i-th nor the (i+1)-th scanning point has a center distance of 0, the spatial distance between them can be calculated, for example as

$$D_{i,i+1} = \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2},$$

where $x_{i+1}$ and $y_{i+1}$ are the x- and y-coordinates of the (i+1)-th scanning point. When the spatial distance $D_{i,i+1}$ is greater than a distance threshold $D_{th}$, the i-th scanning point is excluded from the obstacle image region. When $D_{i,i+1}$ is less than (or equal to) $D_{th}$, it must further be considered whether the spatial distances between the following consecutive scanning points and their immediately preceding neighbors also satisfy the threshold. If C consecutive scanning points all satisfy it and C is greater than a size threshold $C_{th}$, these consecutive scanning points (the (i-C)-th through the i-th) are all classified into the obstacle image region. Here, the value C of the counter characterizes the size of the target.
It should be understood that in the above calculations of the spatial distance and the center distance, the scanning points are taken in the three-dimensional point cloud space, not in the two-dimensional image. A scanning point whose center distance is zero would coincide with the radar center; such a point obviously does not belong to a real obstacle and therefore needs to be removed.
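The following is a minimal sketch of one pass of the scan of fig. 3 (steps S35a to S35e) under the assumptions above (N × M × 3 layout, radar center at the origin, points of one row or column approximately coplanar). The threshold values are illustrative only, and the final flush after the loop, which closes a run reaching the end of the row, is an assumed completion not spelled out in fig. 3.

```python
import numpy as np

D_TH = 0.5   # distance threshold D_th in meters (assumed value)
C_TH = 5     # size threshold C_th in scanning points (assumed value)

def scan_line(points: np.ndarray) -> np.ndarray:
    """points: K x 3 array holding one row (or column) of the point cloud image.
    Returns a K-long bool mask marking points in the obstacle image region."""
    obstacle = np.zeros(len(points), dtype=bool)

    def flush(idx: int, count: int) -> None:
        # Before the counter is reset, compare it with the size threshold:
        # if it exceeds C_th, points (idx-count)..idx belong to an obstacle.
        if count > C_TH:
            obstacle[idx - count:idx + 1] = True

    counter, i = 0, 0                                          # step S35a
    while i + 1 < len(points):
        if np.hypot(points[i, 0], points[i, 1]) == 0:          # steps S35b/S35c: no return at i
            flush(i, counter); counter = 0; i += 1; continue
        if np.hypot(points[i + 1, 0], points[i + 1, 1]) == 0:  # step S35d: no return at i+1
            flush(i, counter); counter = 0; i += 2; continue
        spacing = float(np.linalg.norm(points[i + 1, :2] - points[i, :2]))
        if spacing > D_TH:                                     # step S35e: discontinuity
            flush(i, counter); counter = 0; i += 1
        else:
            counter += 1; i += 1                               # contiguous surface grows
    flush(i, counter)               # close the final run (an assumed completion)
    return obstacle
```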
After determining which point clouds fall within the obstacle image region, the confidence of those point clouds may be adjusted. In some embodiments, step S40 may further include:
step S41: increasing the confidence values of the (i-C)-th to i-th scanning points if the (i-C)-th to i-th scanning points belong to the obstacle image region.
After this step, the point cloud in the obstacle image region carries a higher confidence value, so that it can be distinguished from point clouds outside the obstacle image region when pseudo-obstacles are removed according to the confidence values. The scanning points here are the imaging points of the scanned row or column.
In some embodiments, column-by-column scanning may be performed after row-by-row scanning, or row-by-row scanning may be performed after column-by-column scanning. After scanning in both directions, the confidence of the point cloud in the obstacle image region is higher than that of other regions, and the delineation of the obstacle image region is more accurate. The distance threshold and the size threshold may need to be set to different values for row-wise and column-wise scanning, and their settings are also related to factors such as the scanning resolution of the lidar and the distance between the scanned target and the radar. For example, a size threshold may be defined to correspond to a physical dimension of 30 centimeters; how many scanning points that dimension spans depends on the scanning resolution (the length or angle corresponding to adjacent pixels in a row of the two-dimensional image). As an example, the lidar may have a horizontal resolution of 1024 or more, corresponding to an angular resolution of about 0.35 degrees or finer.
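Reusing the scan_line sketch above, a two-direction pass could look as follows. The additive fusion of the row and column results is an assumption; the description only states that points confirmed in both directions end up with a larger confidence, and a full implementation would give the column pass its own D_th and C_th values.

```python
import numpy as np

def scan_and_classify(cloud_img: np.ndarray) -> np.ndarray:
    """Row pass then column pass over an N x M x 3 point image; returns a
    per-point boost of 0, 1 or 2 to be added to the confidence map."""
    row_mask = np.stack([scan_line(cloud_img[r]) for r in range(cloud_img.shape[0])])
    col_mask = np.stack([scan_line(cloud_img[:, c]) for c in range(cloud_img.shape[1])], axis=1)
    return row_mask.astype(float) + col_mask.astype(float)
```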
To remove the pseudo-obstacles from the three-dimensional point cloud image, a confidence threshold CON_th may be set to distinguish the point clouds of pseudo-obstacles from those of obstacles. In some embodiments, step S40 may further include:
step S42: comparing the confidence value of each imaging point in the three-dimensional point cloud image with the confidence threshold, and regarding imaging points whose confidence value is lower than the confidence threshold as pseudo-obstacles, so as to remove them from the three-dimensional point cloud image.
For example, the confidence value of each imaging point in the three-dimensional point cloud image may be compared with the confidence threshold one by one, and when the confidence value of an imaging point is less than (or equal to) the confidence threshold, that imaging point is removed from the three-dimensional point cloud image.
An actual three-dimensional point cloud image may contain, besides pseudo-obstacles such as the raised sand and smoke discussed above, a ground point cloud. In some embodiments, it is desirable to remove the ground point cloud from the three-dimensional point cloud image as well. For example, as shown by the dotted-line box in fig. 2A, the obstacle identification method according to some embodiments of the present invention may further include step S50:
removing the ground point cloud from the three-dimensional point cloud image before generating the initial confidence values of the imaging points of the three-dimensional point cloud image.
As an example, the ground point cloud may be removed as background. As can be seen from fig. 4A, the shape of the ground point cloud is very regular, so prior knowledge of its shape (e.g., uniform flat stripes) can be used to find the matching point cloud (i.e., the ground point cloud) and remove it from the three-dimensional point cloud image. For example, the ground point cloud may be removed by a method such as random sample consensus (RANSAC) segmentation.
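One possible realization of this ground removal is RANSAC plane fitting, sketched below with the third-party Open3D library; the library choice and the distance threshold are assumptions, the patent only naming RANSAC-style segmentation as an example.

```python
import numpy as np
import open3d as o3d

def remove_ground(points: np.ndarray, dist_th: float = 0.15) -> np.ndarray:
    """points: K x 3 array; returns the points with the dominant plane removed."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    _, inliers = pcd.segment_plane(distance_threshold=dist_th,
                                   ransac_n=3, num_iterations=200)
    mask = np.ones(len(points), dtype=bool)
    mask[inliers] = False            # drop the fitted (ground) plane
    return points[mask]
```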
The obstacle identification method based on the three-dimensional point cloud can remove the influence of pseudo-obstacles such as raised dust and smoke on real obstacle identification in various scenes, and is particularly suitable for identifying obstacles on outdoor non-hardened roads.
Fig. 4B is a schematic diagram of a three-dimensional point cloud image obtained after being processed by the method according to the embodiment of the invention. By comparing fig. 4A and 4B, it can be seen that the false obstacles such as fugitive dust, smoke, etc. have been removed, leaving the vehicle body as a real obstacle (i.e., the obstacle vehicle body point cloud 42 in the figure).
In an embodiment of the present invention, the spatial distance between imaging points is calculated and an independent confidence value is maintained for each imaging point, so the removal of pseudo-obstacles is actually performed per imaging point rather than per point cloud cluster composed of multiple imaging points. This is particularly advantageous when dust or smoke and a real obstacle adhere together in the actual scene, since it avoids erroneously deleting imaging points of the real obstacle merely because some of them were grouped into the same point cloud cluster as imaging points of a pseudo-obstacle.
In addition, the method according to the embodiments of the invention performs its calculations by fully exploiting the mapping between the two-dimensional form of the three-dimensional point cloud image and the three-dimensional space data, which can greatly improve processing efficiency compared with calculating directly on the three-dimensional space data.
The invention also provides an electronic device, a readable storage medium and a computer program product according to the embodiments of the invention. In some embodiments, the electronic device includes at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for identifying an obstacle based on a three-dimensional point cloud according to any of the above embodiments. In some embodiments, the readable storage medium may be a non-transitory computer readable storage medium storing computer instructions for causing the computer to execute the method for identifying an obstacle based on a three-dimensional point cloud according to any one of the above embodiments. In some embodiments, the computer program product comprises a computer program which, when executed by a processor, implements the method for three-dimensional point cloud based obstacle identification according to any of the above embodiments.
FIG. 5 shows a schematic block diagram of an example electronic device 1100 that may be used to implement embodiments of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the electronic device 1100 comprises a computing unit 1101, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the device 1100 may also be stored. The calculation unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
A number of components in device 1100 connect to I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, and the like; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108 such as a magnetic disk, optical disk, or the like; and a communication unit 1109 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1109 allows the device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1101 can be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 1101 performs the respective methods and processes described above, such as the obstacle identification method based on three-dimensional point cloud. For example, in some embodiments, the obstacle identification method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the obstacle identification method described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured by any other suitable means (e.g., by means of firmware) to perform the obstacle identification method.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present invention may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in this disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed herein can be achieved, and the present disclosure is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method for identifying obstacles based on three-dimensional point cloud comprises the following steps:
acquiring a three-dimensional point cloud image of a field environment;
generating initial confidence values for the imaging points of the three-dimensional point cloud image;
distinguishing obstacle image regions from pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target; and
adjusting the confidence values of imaging points in at least one of the obstacle image region and the pseudo-obstacle image region and removing the point cloud of pseudo-obstacles from the three-dimensional point cloud image according to the adjusted confidence values.
2. The method of claim 1, wherein distinguishing obstacle image regions from pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target comprises:
calculating the spatial distance between every two adjacent imaging points in the three-dimensional point cloud image; and
comparing the spatial distance with a distance threshold, and classifying the corresponding imaging point into the pseudo-obstacle image region if the spatial distance is larger than the distance threshold.
3. The method of claim 2, wherein distinguishing obstacle image regions from pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target further comprises:
calculating the size of the imaging target if the spatial distance is less than the distance threshold; and
classifying the imaging target into the obstacle image region if the size of the imaging target is larger than a size threshold.
4. The method of claim 1, wherein distinguishing obstacle image regions from pseudo-obstacle image regions in the three-dimensional point cloud image by calculating the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target comprises:
scanning the three-dimensional point cloud image row by row and/or column by column to calculate the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of each imaging target.
5. The method of claim 4, wherein scanning the three-dimensional point cloud image row by row and/or column by column to calculate the spatial distance between adjacent imaging points in the three-dimensional point cloud image and the size of the imaging target comprises the steps of:
(a) selecting a row or column to be scanned in the three-dimensional point cloud image, setting a counter to zero, and setting i = 0;
(b) calculating the center distance of the ith scanning point of the row or column in the three-dimensional point cloud image;
(c) if the center distance of the ith scanning point is not zero, calculating the center distance of the (i+1)th scanning point; if the center distance of the ith scanning point is zero, setting the counter to zero, incrementing i by 1, and returning to step (b);
(d) if the center distance of the (i+1)th scanning point is not zero, calculating the spatial distance between the (i+1)th and the ith scanning points; if the center distance of the (i+1)th scanning point is zero, setting the counter to zero, incrementing i by 2, and returning to step (b); and
(e) if the spatial distance between the (i+1)th and the ith scanning points is greater than the distance threshold, setting the counter to zero, incrementing i by 1, and returning to step (b); otherwise, incrementing the counter by 1, incrementing i by 1, and returning to step (b),
wherein, each time the counter is about to be set to zero, the value of the counter is first compared with a size threshold; when the value of the counter is greater than the size threshold, the scanning points from the (i-C)th to the ith scanning point belong to an obstacle image region, where C is the value of the counter.
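Steps (a)-(e) map naturally onto a single loop. The sketch below is one possible reading, under these assumptions: the selected row or column is an organized (W, 3) array, the sensor sits at the coordinate origin so a point's "center distance" is its range, and an all-zero point is an empty return.

```python
import numpy as np

def scan_line(points: np.ndarray,
              dist_thresh: float = 0.5,
              size_thresh: int = 5) -> np.ndarray:
    """One row/column scan following steps (a)-(e) (illustrative sketch).

    points: (W, 3) imaging points of the selected row or column; a point
    whose coordinates are all zero is taken to be an empty return, so its
    center distance (range from the assumed sensor origin) is zero.
    Returns a boolean mask of scanning points assigned to obstacle regions.
    """
    W = len(points)
    obstacle = np.zeros(W, dtype=bool)
    ranges = np.linalg.norm(points, axis=1)   # step (b): center distances

    def flush(counter: int, i: int) -> None:
        # Before the counter is set to zero, compare it with the size
        # threshold; a long enough run of closely spaced scanning points
        # is marked as an obstacle region (points i-C .. i, C = counter).
        if counter > size_thresh:
            obstacle[max(i - counter, 0) : i + 1] = True

    counter, i = 0, 0                         # step (a)
    while i + 1 < W:
        if ranges[i] == 0:                    # step (c): empty return at i
            flush(counter, i)
            counter, i = 0, i + 1
            continue
        if ranges[i + 1] == 0:                # step (d): empty return at i+1
            flush(counter, i)
            counter, i = 0, i + 2
            continue
        gap = np.linalg.norm(points[i + 1] - points[i])
        if gap > dist_thresh:                 # step (e): run broken
            flush(counter, i)
            counter, i = 0, i + 1
        else:                                 # adjacent points close enough
            counter, i = counter + 1, i + 1
    flush(counter, min(i, W - 1))             # close the run at end of line
    return obstacle
```

Claim 6's confidence increase then reduces to raising the confidence values on the returned mask, e.g. `conf[scan_line(row)] += delta`.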
6. The method of claim 5, wherein adjusting the confidence values of imaging points in at least one of the obstacle image regions and the pseudo-obstacle image regions and identifying obstacles from the three-dimensional point cloud image according to the adjusted confidence values comprises:
(g) increasing the confidence values of the (i-C)th to the ith scanning points when those scanning points belong to the obstacle image region.
7. The method of any one of claims 1 to 6, wherein adjusting the confidence values of imaging points in at least one of the obstacle image regions and the pseudo-obstacle image regions and removing the point cloud of pseudo-obstacles from the three-dimensional point cloud image according to the adjusted confidence values further comprises:
comparing the confidence value of each imaging point in the three-dimensional point cloud image with a confidence threshold, and treating imaging points whose confidence values are below the confidence threshold as pseudo-obstacles, so as to remove them from the three-dimensional point cloud image.
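As a sketch of claim 7's final filtering step, applied to the confidence array from the sketch after claim 1 (the threshold value here is an assumption):

```python
import numpy as np

def remove_pseudo_obstacles(cloud: np.ndarray,
                            conf: np.ndarray,
                            conf_thresh: float = 0.5) -> np.ndarray:
    """Drop imaging points whose confidence fell below the threshold.
    cloud is (N, 3), conf is (N,). Illustrative only."""
    return cloud[conf >= conf_thresh]
```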
8. The method of any one of claims 1 to 6, wherein the method is used for identifying obstacles on outdoor unpaved (non-hardened) roads, the method further comprising:
removing the ground point cloud from the three-dimensional point cloud image before generating the initial confidence values for the imaging points of the three-dimensional point cloud image.
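Claim 8 does not prescribe how the ground point cloud is removed. One common choice, offered here purely as an assumption rather than the patent's method, is RANSAC plane fitting, e.g. with Open3D; on the uneven unpaved roads the claim targets, a single global plane may prove too coarse and a more local ground model may be needed.

```python
import open3d as o3d

def remove_ground(pcd: o3d.geometry.PointCloud,
                  dist_thresh: float = 0.2) -> o3d.geometry.PointCloud:
    """Remove the dominant ground plane (illustrative sketch)."""
    # Fit the dominant plane by RANSAC and treat its inliers as ground.
    _, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                   ransac_n=3,
                                   num_iterations=1000)
    # Keep every point that is not on the fitted plane.
    return pcd.select_by_index(inliers, invert=True)
```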
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
10. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
11. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN202210131180.1A 2022-02-14 2022-02-14 Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium Pending CN114494075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210131180.1A CN114494075A (en) 2022-02-14 2022-02-14 Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210131180.1A CN114494075A (en) 2022-02-14 2022-02-14 Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN114494075A 2022-05-13

Family

ID=81480090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210131180.1A Pending CN114494075A (en) 2022-02-14 2022-02-14 Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114494075A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023244929A1 (en) * 2022-06-14 2023-12-21 Kodiak Robotics, Inc. Systems and methods for lidar atmospheric filtering background
CN115984803A (en) * 2023-03-10 2023-04-18 Anhui NIO Intelligent Driving Technology Co., Ltd. Data processing method, device, driving device, and medium
CN115984803B (en) * 2023-03-10 2023-12-12 Anhui NIO Intelligent Driving Technology Co., Ltd. Data processing method, device, driving device and medium
CN116605212A (en) * 2023-07-11 2023-08-18 Beijing Jidu Technology Co., Ltd. Vehicle control method, device, computer equipment and storage medium
CN116605212B (en) * 2023-07-11 2023-10-20 Beijing Jidu Technology Co., Ltd. Vehicle control method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN114494075A (en) Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium
CN111046776B (en) Method for detecting obstacle of path of mobile robot based on depth camera
CN110443786B (en) Laser radar point cloud filtering method and device, computer equipment and storage medium
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
EP4130798A1 (en) Target identification method and device
CN111563450B (en) Data processing method, device, equipment and storage medium
CN108109139B (en) Airborne LIDAR three-dimensional building detection method based on gray voxel model
CN109946703B (en) Sensor attitude adjusting method and device
US10748257B2 (en) Point cloud colorization with occlusion detection
CN110471086B (en) Radar fault detection system and method
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
US20150198735A1 (en) Method of Processing 3D Sensor Data to Provide Terrain Segmentation
WO2023179718A1 (en) Point cloud processing method and apparatus for lidar, and device and storage medium
WO2023179717A1 (en) Point cloud processing method and apparatus for laser radar, device, and storage medium
CN108074232B (en) Voxel segmentation-based airborne LIDAR building detection method
US11933884B2 (en) Radar image processing device, radar image processing method, and storage medium
CN115139303A (en) Grid well lid detection method, device, equipment and storage medium
CN117269940B (en) Point cloud data generation method and perception capability verification method of laser radar
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium
CN113376643A (en) Distance detection method and device and electronic equipment
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN117095038A (en) Point cloud filtering method and system for laser scanner
CN116129422A (en) Monocular 3D target detection method, monocular 3D target detection device, electronic equipment and storage medium
CN113763308B (en) Ground detection method, device, server and medium
WO2022214821A2 (en) Monocular depth estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination