CN113869268A - Obstacle ranging method and device, electronic equipment and readable medium - Google Patents
Obstacle ranging method and device, electronic equipment and readable medium
- Publication number: CN113869268A (application CN202111188713.1A)
- Authority: CN (China)
- Prior art keywords: obstacle, vehicle, image, mounted camera, images
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/2321 — Pattern recognition; non-hierarchical clustering using statistics or function optimisation, e.g. modelling of probability density functions
- G06T5/20 — Image enhancement or restoration by the use of local operators
- G06T5/30 — Erosion or dilatation, e.g. thinning
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/70
- G06T7/13 — Edge detection
- G06T7/55 — Depth or shape recovery from multiple images
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T2207/10004 — Still image; photographic image
- G06T2207/20221 — Image fusion; image merging
- G06T2207/20224 — Image subtraction
- G06T2207/30261 — Obstacle (vehicle exterior; vicinity of vehicle)
Abstract
The embodiment of the invention provides an obstacle ranging method and device, electronic equipment and a readable medium. The method comprises: acquiring obstacle images of multiple angles shot by a single vehicle-mounted camera during the driving of a vehicle, and acquiring the external parameters of the vehicle-mounted camera when each obstacle image is shot; performing ground plane projection processing on the obstacle images according to the external parameters of the vehicle-mounted camera to obtain projection images; performing difference processing and contour extraction processing on the projection images to determine the grounding point of an obstacle; and calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle. By applying the embodiment of the invention, the position of the grounding point of an obstacle can be determined from images of the obstacle shot at different angles by a single vehicle-mounted camera, so that the position of the obstacle is detected and the distance between the obstacle and the vehicle is identified with a single vehicle-mounted camera; no laser radar or multiple cameras are needed, which reduces the manufacturing cost of the vehicle.
Description
Technical Field
The present invention relates to the field of vehicle technologies, and in particular, to a method for measuring a distance to an obstacle, an apparatus for measuring a distance to an obstacle, an electronic device, and a computer-readable medium.
Background
In the automatic driving process of a vehicle, a laser radar or a plurality of cameras on the vehicle are generally adopted to identify the position of an obstacle around the vehicle.
However, a laser radar is expensive, and when cameras are used, at least two cameras must photograph the same obstacle simultaneously to recognize its position; both approaches result in a high manufacturing cost for the vehicle.
Disclosure of Invention
The embodiment of the invention provides an obstacle ranging method, an obstacle ranging device, electronic equipment and a computer readable storage medium, and aims to solve the problem that the manufacturing cost of a vehicle is high when a laser radar or a plurality of cameras are adopted to identify the position of an obstacle.
The embodiment of the invention discloses a method for measuring the distance of an obstacle, which comprises the following steps:
in the running process of a vehicle, obtaining obstacle images of multiple angles shot by a single vehicle-mounted camera, and obtaining external parameters of the vehicle-mounted camera when the obstacle images of the multiple angles are shot;
according to external parameters of the vehicle-mounted camera, carrying out ground plane projection processing on the obstacle image to obtain a projected image;
carrying out pixel fusion on the projection image to obtain a background image;
performing difference processing on each projection image with the background image to obtain obstacle projection images in which the obstacles are retained;
carrying out contour extraction processing on the obstacle projection image to obtain an obstacle contour image;
determining grounding points of the obstacles from the obstacle outline image;
and calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle.
Optionally, the acquiring of obstacle images of multiple angles shot by the vehicle-mounted camera during the driving of the vehicle, and of the external parameters of the vehicle-mounted camera when the obstacle images of multiple angles are shot, includes:
acquiring reference external parameters of the vehicle-mounted camera; the reference external parameters are obtained by calibrating the vehicle-mounted camera in a first world coordinate system established at the point on the ground vertically below the center of the rear axle of the vehicle;
during the driving of the vehicle, obtaining obstacle images of multiple angles shot by the vehicle-mounted camera, and obtaining the world coordinates of the vehicle-mounted camera in a second world coordinate system when each obstacle image is shot; the origin of the second world coordinate system is the point on the ground vertically below the center of the rear axle of the vehicle when the first obstacle image is shot;
and calculating the external parameters of the vehicle-mounted camera when the obstacle images of the plurality of angles are shot according to the reference external parameters and the world coordinates of the vehicle-mounted camera in the second world coordinate system.
Optionally, the performing ground plane projection processing on the obstacle image according to the external parameter of the vehicle-mounted camera to obtain a projected image includes:
acquiring internal parameters of the camera;
and carrying out ground plane projection processing on the obstacle image according to the internal parameters and the external parameters of the vehicle-mounted camera to obtain a projected image.
Optionally, the contour extraction of the obstacle projection image to obtain the obstacle contour image includes:
performing dilation-erosion processing and density clustering processing on the obstacle projection image to obtain a change area image;
and carrying out Hough transform fitting processing on the pixel points of the image in the change area to obtain the obstacle contour image.
Optionally, the determining the grounding point of the obstacle from the obstacle outline image includes:
and taking the position where the obstacles coincide in the obstacle contour images as the grounding point of the obstacle.
Optionally, the calculating a distance between the obstacle and the vehicle according to the grounding point of the obstacle includes:
acquiring world coordinates of a grounding point of the vehicle in the second world coordinate system;
determining world coordinates of a grounding point of the obstacle in the second world coordinate system;
calculating the distance between the world coordinates of the grounding point of the vehicle and the world coordinates of the grounding point of the obstacle in the second world coordinate system as the distance between the obstacle and the vehicle.
The embodiment of the invention discloses a distance measuring device for an obstacle, which comprises:
the image acquisition module is used for acquiring obstacle images of multiple angles shot by a single vehicle-mounted camera during the driving of a vehicle, and for acquiring external parameters of the vehicle-mounted camera when the obstacle images of the multiple angles are shot;
the image projection module is used for carrying out ground plane projection processing on the obstacle image according to external parameters of the vehicle-mounted camera to obtain a projected image;
the pixel fusion module is used for carrying out pixel fusion on the projection image to obtain a background image;
the image difference module is used for performing difference processing on each projection image with the background image to obtain obstacle projection images in which the obstacles are retained;
the contour extraction module is used for carrying out contour extraction processing on the obstacle projection image to obtain an obstacle contour image;
the grounding point determining module is used for determining the grounding point of the obstacle from the obstacle outline image;
and the distance calculation module is used for calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle.
Optionally, the image acquisition module includes:
the parameter acquisition submodule is used for acquiring reference external parameters of the vehicle-mounted camera; the reference external parameters are obtained by calibrating the vehicle-mounted camera in a first world coordinate system established at the point on the ground vertically below the center of the rear axle of the vehicle;
the image acquisition submodule is used for acquiring obstacle images of multiple angles shot by the vehicle-mounted camera during the driving of the vehicle, and the world coordinates of the vehicle-mounted camera in a second world coordinate system when each obstacle image is shot; the origin of the second world coordinate system is the point on the ground vertically below the center of the rear axle of the vehicle when the first obstacle image is shot;
and the parameter calculation module is used for calculating the external parameters of the vehicle-mounted camera when the obstacle images at the plurality of angles are shot according to the reference external parameters and the world coordinates of the vehicle-mounted camera in the second world coordinate system.
Optionally, the image projection module comprises:
the parameter acquisition submodule is used for acquiring internal parameters of the camera;
and the image projection submodule is used for carrying out ground plane projection processing on the obstacle image according to the internal parameters and the external parameters of the vehicle-mounted camera to obtain a projected image.
Optionally, the contour extraction module includes:
the first image processing submodule is used for performing dilation-erosion processing and density clustering processing on the obstacle projection image to obtain a change area image;
and the second image processing submodule is used for carrying out Hough transform fitting processing on the pixel points of the image in the change area to obtain the obstacle contour image.
Optionally, the ground point determining module includes:
and the grounding point determining submodule is used for taking the position where the obstacles coincide in the obstacle contour images as the grounding point of the obstacle.
Optionally, the distance calculating module includes:
the world coordinate acquisition submodule is used for acquiring world coordinates of a grounding point of the vehicle in the second world coordinate system;
the world coordinate acquisition submodule is used for determining the world coordinate of the grounding point of the obstacle in the second world coordinate system;
and the distance calculation submodule is used for calculating the distance between the world coordinates of the grounding point of the vehicle and the grounding point of the obstacle in the second world coordinate system as the distance between the obstacle and the vehicle.
The embodiment of the invention also discloses electronic equipment which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory finish mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
Also disclosed are one or more computer-readable media having instructions stored thereon, which, when executed by one or more processors, cause the processors to perform a method according to an embodiment of the invention.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, during the driving of a vehicle, obstacle images of multiple angles shot by a single vehicle-mounted camera are obtained, together with the external parameters of the vehicle-mounted camera when each obstacle image is shot. Ground plane projection processing is performed on the obstacle images according to the external parameters to obtain projection images; the projection images are subjected to pixel fusion to obtain a background image; each projection image is subjected to difference processing with the background image to obtain obstacle projection images in which the obstacles are retained; contour extraction processing is performed on the obstacle projection images to obtain obstacle contour images; the grounding point of the obstacle is determined from the obstacle contour images; and the distance between the obstacle and the vehicle is calculated according to the grounding point. By applying the embodiment of the invention, the position of the grounding point of an obstacle can be determined from images of the obstacle shot at different angles by a single vehicle-mounted camera, so that the position of the obstacle is detected and the distance between the obstacle and the vehicle is identified with a single vehicle-mounted camera; no laser radar or multiple cameras are needed, which reduces the manufacturing cost of the vehicle.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for measuring distance of an obstacle according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating steps of another method for measuring distance to an obstacle according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of an obstacle projection image provided in an embodiment of the present invention;
FIG. 4 is a diagram of a background image provided in an embodiment of the invention;
FIG. 5 is a schematic illustration of a projection image differencing process provided in an embodiment of the invention;
FIG. 6 is a schematic diagram of expansion erosion and density clustering of a projected image provided in an embodiment of the present invention;
fig. 7 is a schematic diagram of a hough transform fitting process of a projection image provided in an embodiment of the present invention;
fig. 8 is a block diagram of an obstacle distance measuring device according to an embodiment of the present invention;
fig. 9 is a block diagram of an electronic device provided in an embodiment of the invention;
fig. 10 is a schematic diagram of a computer-readable medium provided in an embodiment of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
At present, when a single camera shoots an image of an obstacle, only one side view of the obstacle can be obtained and the position of the grounding point of the obstacle cannot be determined; a laser radar or at least two cameras are therefore needed to determine the position of an obstacle around a vehicle, which makes the manufacturing cost of the vehicle high.
The embodiment of the invention provides an obstacle ranging method, an obstacle ranging device, electronic equipment and a readable medium.
Referring to fig. 1, a flowchart illustrating steps of a method for measuring distance of an obstacle provided in an embodiment of the present invention is shown, which may specifically include the following steps:
step 101: in the running process of a vehicle, obtaining obstacle images of multiple angles shot by a vehicle-mounted camera, and obtaining external parameters of the vehicle-mounted camera when shooting the obstacle images of multiple angles.
The external parameters are the conversion relation of a camera coordinate system relative to a world coordinate system and comprise a translation matrix and a rotation matrix.
Specifically, the embodiment of the invention is applied to a vehicle. During the driving of the vehicle, obstacle images of multiple angles shot by a single vehicle-mounted camera are obtained: for example, while the vehicle is driving, images of obstacles in front of or around the vehicle are collected by the vehicle-mounted camera at a certain sampling frequency, yielding obstacle images of multiple angles. At the same time, the driving track of the vehicle is recorded, and the external parameters of the vehicle-mounted camera at the moment each obstacle image was shot are determined from the driving track, so that the external parameters corresponding to each of the obstacle images of multiple angles are obtained.
Usually, when the first obstacle image is collected (shot), the position of the vehicle-mounted camera, or of some structure on the vehicle, is taken as the origin of the world coordinate system. The world coordinates of the vehicle-mounted camera in this world coordinate system therefore change continuously during the driving of the vehicle, and different world coordinates correspond to different external parameters; that is, the external parameters of the vehicle-mounted camera differ between the obstacle images shot at different angles.
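As a minimal sketch of the extrinsic-parameter convention described above (a rotation matrix plus a translation matrix forming one rigid transform), the following composes a 4x4 extrinsic matrix and applies it to a world point. The pose values are illustrative and are not taken from the patent.

```python
import numpy as np

def make_extrinsic(R, t):
    """Compose a 4x4 extrinsic matrix from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative pose: identity rotation, camera 1.5 m above the world origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.5])
T = make_extrinsic(R, t)

# A world point at the origin maps to (0, 0, 1.5) in the camera frame.
p_world = np.array([0.0, 0.0, 0.0, 1.0])
p_cam = T @ p_world
```

With this convention, a change of the camera's world coordinates during driving changes only the translation part of T, which is why each shot carries its own external parameters.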
Step 102: and carrying out ground plane projection processing on the obstacle image according to the external parameters of the vehicle-mounted camera to obtain a projected image.
Specifically, ground plane projection is performed on each obtained obstacle image according to the camera imaging principle and the camera pinhole imaging model, using the external parameters and internal parameters of the vehicle-mounted camera corresponding to that obstacle image, so that a projection image corresponding to each obstacle image is obtained, yielding a plurality of projection images. The internal parameters are obtained by calibration in advance and do not change with the position of the vehicle-mounted camera in the world coordinate system.
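For a planar ground (Z = 0), the pinhole projection K·[R|t] collapses to a 3x3 homography K·[r1 r2 t], which is one standard way to realize the ground plane projection described above. The sketch below uses illustrative intrinsics and a hypothetical downward-looking pose; it is an assumption about the implementation, not the patent's exact formulation.

```python
import numpy as np

# Illustrative intrinsics of a hypothetical 640x480 camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def ground_homography(K, R, t):
    """Homography mapping ground points (X, Y) on the plane Z=0 to pixels.
    For Z=0, K [R|t] [X Y 0 1]^T = K [r1 r2 t] [X Y 1]^T."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

# Hypothetical pose: camera looking straight down from 2 m.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.0, 0.0, 2.0])
H = ground_homography(K, R, t)

# The ground origin projects to the principal point (320, 240).
p = H @ np.array([0.0, 0.0, 1.0])
u, v = p[0] / p[2], p[1] / p[2]
```

Warping each obstacle image with the inverse of its own H produces projection images in which the same real-world ground position has the same pixel coordinates, which is what makes the later fusion and differencing steps possible.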
Step 103: and carrying out pixel fusion on the projection image to obtain a background image.
Specifically, the purpose of performing pixel fusion on the plurality of projection images is to remove the obstacle from them: for example, the gray values corresponding to the same pixel coordinate in the plurality of projection images are superimposed and averaged to obtain a new background image. Because the obstacle occupies different positions in different projection images, its gray values are faded out by the averaging, so the obstacle is removed and the background image is obtained.
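The averaging step above can be sketched with toy data: three aligned one-row "projection images" share a static ground value, while a bright obstacle pixel moves between frames. The values are invented for illustration.

```python
import numpy as np

# Three toy 1x5 grayscale projection images: static ground value 100,
# with an obstacle (value 255) at a different column in each frame.
frames = np.full((3, 1, 5), 100.0)
frames[0, 0, 1] = 255.0
frames[1, 0, 2] = 255.0
frames[2, 0, 3] = 255.0

# Pixel fusion: average the aligned projections. The static ground keeps
# its gray value; the moving obstacle is diluted toward the background.
background = frames.mean(axis=0)
```

Static pixels stay at 100, while each column the obstacle visited averages to about 152 rather than 255, so the obstacle fades from the fused background.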
Step 104: and performing difference processing on the projection images and the background images respectively to obtain obstacle projection images retaining the obstacles.
The difference processing is to subtract the corresponding pixel values of the two images to weaken the similar part of the images and highlight the changed part of the images.
Specifically, after the plurality of projection images are subjected to pixel fusion to obtain the background image, each projection image is subjected to difference processing with the background image; that is, the pixel values of the background image are subtracted from the pixel values of the projection image, so that the background of the projection image is removed and an obstacle projection image in which the obstacle is retained is obtained.
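A minimal sketch of the difference processing on the same kind of toy data: subtracting the background suppresses the shared ground and leaves only the obstacle pixel. The threshold value is an illustrative assumption.

```python
import numpy as np

background = np.full((1, 5), 100.0)
projection = background.copy()
projection[0, 2] = 255.0           # obstacle pixel in this frame

# Absolute difference weakens the similar parts and highlights the change;
# thresholding turns the result into an obstacle mask.
diff = np.abs(projection - background)
mask = diff > 30.0                 # illustrative threshold
```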
Step 105: and carrying out contour extraction processing on the obstacle projection image to obtain an obstacle contour image.
Specifically, contour extraction processing, such as dilation erosion processing and hough transform processing, is performed on the obstacle projection image, and the contour of the obstacle is extracted from the obstacle projection image, so that an obstacle contour image in which the contour of the obstacle is retained is obtained.
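The dilation-erosion step can be illustrated with plain numpy binary morphology (a real system would more likely use a library routine; this hand-rolled 3x3 version is only a sketch). Dilating and then eroding (a morphological closing) fills small holes left by imperfect differencing before the contour is extracted.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: a pixel turns on if any 8-neighbour is on."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

def erode(mask):
    """3x3 binary erosion: a pixel stays on only if all 8-neighbours are on."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

# Toy obstacle blob with a one-pixel hole from imperfect differencing.
m = np.zeros((5, 7), dtype=bool)
m[1:4, 1:6] = True
m[2, 3] = False
closed = erode(dilate(m))          # closing fills the hole
```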
Step 106: and determining the grounding point of the obstacle from the obstacle outline image.
Specifically, the positions occupied by the obstacle differ between the obstacle contour images, so the difference areas of the plurality of obstacle contour images can be compared, and the position where the obstacle coincides across the obstacle contour images can be determined as the grounding point of the obstacle.
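One simple reading of "the position where the obstacles coincide" is the intersection of the obstacle masks across contour images: the projected body of an above-ground obstacle shifts between viewpoints, while its ground contact stays fixed. The toy masks below are invented to illustrate that intersection.

```python
import numpy as np

# Toy obstacle masks from three contour images: the obstacle body occupies
# a different column in each view, but the ground-contact pixel is fixed.
a = np.zeros((4, 6), dtype=bool); a[0:3, 1] = True; a[3, 2] = True
b = np.zeros((4, 6), dtype=bool); b[0:3, 3] = True; b[3, 2] = True
c = np.zeros((4, 6), dtype=bool); c[0:3, 4] = True; c[3, 2] = True

# Pixels where the obstacle coincides in every view: taken as the ground point.
ground = a & b & c
ys, xs = np.nonzero(ground)
```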
Step 107: and calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle.
Specifically, an image acquired by the vehicle-mounted camera can be converted into a digital image (projection image) in the form of an M × N array in the computer; the column number and row number (u, v) of each point in the array are the coordinates of that point in the pixel coordinate system. Therefore, after the grounding point of the obstacle in the obstacle contour image is determined, the pixel coordinates of the grounding point can be acquired and the distance between the obstacle and the vehicle calculated from them: for example, the pixel coordinates are converted into world coordinates, and the distance between the obstacle and the vehicle is then calculated from the world coordinates of the vehicle and the world coordinates of the obstacle.
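A sketch of the final pixel-to-world conversion and distance calculation: since the projection images live on the ground plane, the ground-plane homography can be inverted to recover world (X, Y) from a pixel, and the distance is then Euclidean. The scale (1 pixel = 1 cm) and coordinates are illustrative assumptions, not values from the patent.

```python
import numpy as np

def pixel_to_ground(H, u, v):
    """Invert the ground-plane homography to map a pixel back to world (X, Y) on Z=0."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical homography: world metres -> pixels at 100 px per metre.
H = np.diag([100.0, 100.0, 1.0])

# Obstacle grounding point found at pixel (250, 0) in the projection image.
gx, gy = pixel_to_ground(H, 250.0, 0.0)

# Vehicle grounding point at the origin of the second world coordinate system.
vehicle = np.array([0.0, 0.0])
distance = float(np.hypot(gx - vehicle[0], gy - vehicle[1]))
```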
In the embodiment of the invention, the position of the grounding point of the obstacle can be determined according to the images of a plurality of obstacles at different angles shot by a single vehicle-mounted camera on the vehicle, so that the position of the obstacle is detected, the distance between the obstacle and the vehicle can be identified through the single vehicle-mounted camera, a laser radar or a plurality of cameras are not needed, and the manufacturing cost of the vehicle is reduced.
Referring to fig. 2, a flowchart illustrating steps of another obstacle distance measuring method provided in the embodiment of the present invention is shown, which may specifically include the following steps:
step 201: acquiring reference external parameters of the vehicle-mounted camera; the reference external parameters are obtained by calibrating a first world coordinate system established by the vehicle-mounted camera based on the position of the intersection point of the vehicle rear axle center and the ground vertically downwards.
Specifically, before the vehicle is used, the vehicle-mounted camera needs to be calibrated in advance: for example, a first world coordinate system is established at the point on the ground vertically below the center of the rear axle of the vehicle, and the vehicle-mounted camera is calibrated in this coordinate system to obtain the rotation matrix and translation matrix of the vehicle-mounted camera coordinate system relative to the first world coordinate system, that is, the reference external parameters of the vehicle-mounted camera.
It should be noted that the origin of the first world coordinate system may be the position of the vehicle-mounted camera or any position on the vehicle; in the embodiment of the present invention, the point on the ground vertically below the center of the rear axle of the vehicle is taken as an example for description, but the present invention is not limited thereto.
Step 202: during the driving of the vehicle, obtaining obstacle images of multiple angles shot by the vehicle-mounted camera, and obtaining the world coordinates of the vehicle-mounted camera in a second world coordinate system when each obstacle image is shot; the origin of the second world coordinate system is the point on the ground vertically below the center of the rear axle of the vehicle when the first obstacle image is shot.
Specifically, during the driving of the vehicle, obstacle images of multiple angles shot by the vehicle-mounted camera are acquired: for example, while the vehicle runs, images of obstacles in front of or around the vehicle are collected by the vehicle-mounted camera at a certain sampling frequency, yielding obstacle images of multiple angles. At the same time, the running track of the vehicle is recorded, and the world coordinates of the vehicle-mounted camera in the second world coordinate system at the moment each obstacle image was shot are determined from the running track.
Step 203: calculating the external parameters of the vehicle-mounted camera when the obstacle images of the plurality of angles are shot, according to the reference external parameters and the world coordinates of the vehicle-mounted camera in the second world coordinate system.
Specifically, the point on the ground vertically below the center of the rear axle of the vehicle when the first obstacle image is shot is used as the origin of the second world coordinate system, so the external parameters of the vehicle-mounted camera when the first obstacle image is shot are exactly the reference external parameters.
The displacement between the world coordinates of the vehicle-mounted camera when each subsequent obstacle image is shot and its world coordinates when the first obstacle image was shot is then calculated, and the external parameters corresponding to the vehicle-mounted camera for each subsequent obstacle image are calculated from this displacement and the reference external parameters (translation matrix and rotation matrix).
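Under the simplifying assumption of a pure translation (no heading change), updating the extrinsics from the vehicle's displacement can be sketched as left-multiplying the reference transform by a translation. This is an illustrative reading of the step above; a real trajectory would also contribute a rotation.

```python
import numpy as np

def update_extrinsic(T_ref, displacement):
    """Shift the reference extrinsic transform by the vehicle's displacement
    in the second world frame (pure translation assumed for illustration)."""
    T_move = np.eye(4)
    T_move[:3, 3] = displacement
    return T_move @ T_ref

# Reference extrinsics: camera 1.5 m above the second world origin (illustrative).
T_ref = np.eye(4)
T_ref[:3, 3] = [0.0, 0.0, 1.5]

# Vehicle has driven 3 m forward along x since the first obstacle image.
T_now = update_extrinsic(T_ref, np.array([3.0, 0.0, 0.0]))
```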
Step 204: and carrying out ground plane projection processing on the obstacle image according to the external parameters of the vehicle-mounted camera to obtain a projected image.
In an embodiment of the present invention, the step 204 includes: acquiring internal parameters of the camera; and carrying out ground plane projection processing on the obstacle image according to the internal parameters and the external parameters of the vehicle-mounted camera to obtain a projected image.
Specifically, the internal parameters of the vehicle-mounted camera are obtained in advance through calibration. Using the internal and external parameters of the vehicle-mounted camera, the original obstacle image is projected onto the ground plane according to the pinhole imaging model of the camera, yielding the corresponding projection image. Because each original obstacle image is projected with the external parameters that correspond to it, the same real-world position has the same pixel coordinates in every projection image: the projection images all display the same ground area, while the angle and position of the obstacle within them differ. Referring to fig. 3, which shows a schematic diagram of projection images of an obstacle according to an embodiment of the present invention, it can be seen that the multiple projection images display the area where the obstacle is located; they show the same area, but the displayed angle and position of the obstacle differ.
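For the ground plane (Z = 0 in the world frame), the pinhole model reduces the projection to a 3 × 3 homography built from the intrinsics K and the extrinsics (R, t), so each pixel can be mapped back onto the ground. A minimal sketch with illustrative names, assuming the extrinsics map world coordinates to camera coordinates:

```python
import numpy as np

def ground_homography(K, R, t):
    """Homography mapping ground-plane world coords (X, Y, 1) to pixels.

    With Z = 0, the projection K [R | t] (X, Y, 0, 1)^T collapses to
    K [r1 r2 t] (X, Y, 1)^T, where r1, r2 are the first two columns of R.
    """
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def project_to_ground(H, u, v):
    """Map a pixel (u, v) back onto the ground plane via the inverse homography."""
    p = np.linalg.solve(H, np.array([u, v, 1.0]))
    return p[:2] / p[2]  # (X, Y) on the ground plane
```

Applying `project_to_ground` to every pixel of an obstacle image, with the extrinsics of the shot in which it was taken, produces projection images whose pixel coordinates agree for the same real-world ground position.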
Step 205: and carrying out pixel fusion on the projection image to obtain a background image.
Specifically, the purpose of performing pixel fusion on the multiple projection images is to remove the obstacle from them. For example, the gray values at the same pixel coordinate across the projection images are superimposed and averaged to obtain a new background image. Because the obstacle occupies a different position in each projection image, its gray values fade out during the averaging, so the obstacle is removed and the background image is obtained. Referring to fig. 4, which shows a schematic diagram of a background image provided in an embodiment of the present invention, it can be seen that after pixel fusion of the multiple projection images, a ground background image with the obstacle removed is obtained.
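The fusion described here is a per-pixel average of the aligned projection images; because the obstacle sits at a different place in each projection, its gray values are diluted toward the ground's. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def fuse_background(projections):
    """Average gray values at each pixel across aligned projection images.

    The obstacle, which appears at a different position in each view,
    fades out in the mean, leaving a ground background image.
    """
    stack = np.stack([p.astype(np.float64) for p in projections])
    return stack.mean(axis=0)
```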
Step 206: and performing difference processing between each of the projection images and the background image to obtain obstacle projection images retaining the obstacles.
Referring to fig. 5, which shows a schematic diagram of projection image difference processing provided in an embodiment of the present invention: in fig. 5, (a) is a projection image, (b) is the background image, and (c) is the obstacle projection image in which the obstacle is retained. After difference processing between a projection image and the background image, an obstacle projection image retaining the obstacle is obtained.
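The difference processing can be sketched as a thresholded absolute difference between each projection image and the fused background; the threshold value below is an assumption for illustration, not a value taken from the patent:

```python
import numpy as np

def obstacle_mask(projection, background, threshold=30.0):
    """Binary mask of pixels where a projection differs from the background.

    Ground pixels cancel out in the difference; pixels belonging to the
    obstacle exceed the threshold and are retained.
    """
    diff = np.abs(projection.astype(np.float64) - background.astype(np.float64))
    return (diff > threshold).astype(np.uint8)
```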
Step 207: and carrying out contour extraction processing on the obstacle projection image to obtain an obstacle contour image.
In an embodiment of the present invention, the step 207 includes: performing dilation-erosion processing and density clustering processing on the obstacle projection image to obtain a change area image; and performing Hough transform fitting processing on the pixel points of the change area image to obtain the obstacle contour image.
The dilation-erosion processing effectively filters out noise while retaining the original information in the image; the density clustering processing examines the continuity between samples from the perspective of sample density and continuously expands clusters based on connectable samples to obtain a final clustering result; the Hough transform fitting processing is a feature extraction technique for extracting shape boundaries (straight lines, circles, and the like).
Specifically, after the obstacle projection image is obtained, dilation-erosion processing is performed on it to remove noise; density clustering processing is then performed so that dense points in the projection image form clusters and are connected; and Hough transform fitting processing is performed to extract the contour of the obstacle, yielding an obstacle contour image in which the contour of the obstacle is retained. Referring to fig. 6, which shows a schematic diagram of dilation-erosion and density clustering of a projection image provided in an embodiment of the present invention: in fig. 6, (a) is an obstacle projection image, and (b) is the change area image after dilation-erosion processing and density clustering processing. Referring to fig. 7, which shows a schematic diagram of the Hough transform fitting processing of a projection image provided in an embodiment of the present invention: in fig. 7, (a) is the change area image after dilation-erosion processing and density clustering processing, and (b) is the obstacle contour image after Hough transform fitting processing.
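As a sketch of the dilation-erosion step only (density clustering and Hough fitting are omitted for brevity), morphological closing, that is, dilation followed by erosion, removes small holes and gaps in the binary obstacle mask while preserving its overall shape. Plain-numpy implementations with a square structuring element, all names illustrative:

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation of a uint8 0/1 mask with a k x k square element."""
    pad = k // 2
    padded = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            # A pixel becomes 1 if any neighbour in the k x k window is 1.
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion of a uint8 0/1 mask with a k x k square element."""
    pad = k // 2
    padded = np.pad(mask, pad)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            # A pixel stays 1 only if every neighbour in the window is 1.
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def close_mask(mask, k=3):
    """Morphological closing: dilation then erosion, filling small holes."""
    return erode(dilate(mask, k), k)
```

In practice a library routine such as OpenCV's morphology operations would be used; this sketch only makes the dilation-erosion idea concrete.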
Step 208: and determining the grounding point of the obstacle from the obstacle outline image.
In an embodiment of the present invention, the step 208 includes: and taking the position where the obstacles are overlapped in each obstacle outline image as the grounding point of the obstacle.
Specifically, in the obstacle contour images taken from different angles, the obstacle occupies different positions, so the differing areas of the multiple obstacle contour images can be compared and the positions where the obstacle coincides across all of them determined as its grounding points. For example, if the obstacle occupies positions 1, 2 and 3 in contour image A, and positions 1, 4 and 5 in contour image B, then the coinciding position is 1, and position 1 is the grounding position of the obstacle.
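The coincidence test can be sketched as a logical AND across the binary contour images: only pixels occupied by the obstacle in every view survive, matching the A = {1, 2, 3}, B = {1, 4, 5} example above, whose intersection is {1}:

```python
import numpy as np

def grounding_points(contour_masks):
    """Pixels where the obstacle coincides in every contour image.

    The projected obstacle body falls in a different direction in each
    view, so the coinciding pixels are taken as the grounding points.
    """
    overlap = contour_masks[0].astype(bool)
    for mask in contour_masks[1:]:
        overlap = overlap & mask.astype(bool)
    return np.argwhere(overlap)  # (row, col) pixel coordinates
```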
Step 209: and calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle.
In an embodiment of the present invention, the step 209 includes: acquiring world coordinates of the grounding point of the vehicle in the second world coordinate system; determining world coordinates of the grounding point of the obstacle in the second world coordinate system; and calculating the distance between these two sets of world coordinates as the distance between the obstacle and the vehicle.
Specifically, an image acquired by the vehicle-mounted camera is converted in the computer into a digital image (projection image) in the form of an M × N array; the column number and row number (u, v) of each point in this array are the coordinates of that point in the pixel coordinate system. After the grounding point of the obstacle in the obstacle contour image is determined, the pixel coordinates of that grounding point can therefore be obtained.
Before the vehicle is used, the world coordinates corresponding to each pixel coordinate in the area covered by the vehicle-mounted camera are stored in advance. The world coordinates of the obstacle's grounding point in the second world coordinate system can thus be looked up from its pixel coordinates, the world coordinates of the vehicle's grounding point in the second world coordinate system at the current moment can be obtained, and the distance between the two sets of world coordinates can be calculated as the distance between the obstacle and the vehicle.
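The final calculation is a Euclidean distance between two points expressed in the second world coordinate system; since both grounding points lie on the ground plane, two coordinates suffice. A trivial sketch:

```python
import numpy as np

def obstacle_distance(vehicle_ground_xy, obstacle_ground_xy):
    """Euclidean distance between the two grounding points on the ground plane."""
    a = np.asarray(vehicle_ground_xy, dtype=np.float64)
    b = np.asarray(obstacle_ground_xy, dtype=np.float64)
    return float(np.linalg.norm(b - a))
```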
In the embodiment of the invention, the position of the grounding point of an obstacle can be determined from images of the obstacle captured at multiple different angles by a single vehicle-mounted camera on the vehicle, so the position of the obstacle is detected and the distance between the obstacle and the vehicle can be measured with a single vehicle-mounted camera. No lidar or multiple cameras are needed, which reduces the manufacturing cost of the vehicle.
At the same time, the single camera on the vehicle captures the obstacle at multiple different angles; the images from these angles are projected onto the ground plane, and the grounding point of the obstacle in the projections is determined through difference processing, dilation-erosion processing, density clustering processing and Hough transform fitting processing. The position where the obstacle appears can therefore be identified accurately, improving the accuracy of the calculated distance between the obstacle and the vehicle.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily required by the present invention.
Referring to fig. 8, a block diagram of a structure of an obstacle distance measuring device provided in the embodiment of the present invention is shown, and specifically, the structure may include the following modules:
an image obtaining module 801, configured to obtain, during a vehicle driving process, obstacle images at multiple angles captured by a single vehicle-mounted camera, and external parameters of the vehicle-mounted camera when capturing the obstacle images at the multiple angles;
the image projection module 802 is configured to perform ground plane projection processing on the obstacle image according to external parameters of the vehicle-mounted camera to obtain a projected image;
a pixel fusion module 803, configured to perform pixel fusion on the projection image to obtain a background image;
an image difference module 804, configured to perform difference processing on the projection images and the background image, respectively, to obtain an obstacle projection image in which the obstacle is retained;
the contour extraction module 805 is configured to perform contour extraction processing on the obstacle projection image to obtain an obstacle contour image;
a grounding point determining module 806, configured to determine a grounding point of the obstacle from the obstacle contour image;
a distance calculating module 807 for calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle.
Optionally, the image acquiring module 801 includes:
the parameter acquisition submodule is used for acquiring reference external parameters of the vehicle-mounted camera; the reference external parameters are obtained by calibrating the vehicle-mounted camera in a first world coordinate system established with the point where a vertical line through the center of the vehicle's rear axle intersects the ground as its origin;
the image acquisition submodule is used for acquiring, during the running of the vehicle, obstacle images of multiple angles shot by the vehicle-mounted camera, and the world coordinates of the vehicle-mounted camera in a second world coordinate system when the obstacle images of the multiple angles are shot; the origin of the second world coordinate system is the point where a vertical line through the center of the rear axle of the vehicle intersects the ground when the first obstacle image is shot;
and the parameter calculation submodule is used for calculating the external parameters of the vehicle-mounted camera when the obstacle images at the plurality of angles are shot according to the reference external parameters and the world coordinates of the vehicle-mounted camera in the second world coordinate system.
Optionally, the image projection module 802 includes:
the parameter acquisition submodule is used for acquiring internal parameters of the camera;
and the image projection submodule is used for carrying out ground plane projection processing on the obstacle image according to the internal parameters and the external parameters of the vehicle-mounted camera to obtain a projected image.
Optionally, the contour extraction module 805 includes:
the first image processing submodule is used for performing dilation-erosion processing and density clustering processing on the obstacle projection image to obtain a change area image;
and the second image processing submodule is used for carrying out Hough transform fitting processing on the pixel points of the image in the change area to obtain the obstacle contour image.
Optionally, the ground point determining module 806 includes:
and the grounding point determining submodule is used for taking the position of the superposition of the obstacles in each obstacle outline image as the grounding point of the obstacles.
Optionally, the distance calculating module 807 includes:
the world coordinate acquisition submodule is used for acquiring world coordinates of a grounding point of the vehicle in the second world coordinate system;
the world coordinate acquisition submodule is used for determining the world coordinate of the grounding point of the obstacle in the second world coordinate system;
and the distance calculation submodule is used for calculating the distance between the world coordinates, in the second world coordinate system, of the grounding point of the vehicle and those of the grounding point of the obstacle as the distance between the obstacle and the vehicle.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
In addition, an embodiment of the present invention further provides an electronic device, as shown in fig. 9, which includes a processor 901, a communication interface 902, a memory 903 and a communication bus 904, where the processor 901, the communication interface 902 and the memory 903 communicate with one another through the communication bus 904.
a memory 903 for storing computer programs;
the processor 901 is configured to implement the obstacle detection method described in the above embodiment when executing the program stored in the memory 903.
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present invention, as shown in fig. 10, there is further provided a computer-readable storage medium 1001, which stores instructions that, when executed on a computer, cause the computer to perform the obstacle ranging method described in the above embodiment.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the obstacle ranging method described in the above embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. An obstacle ranging method, comprising:
in the running process of a vehicle, obtaining obstacle images of multiple angles shot by a single vehicle-mounted camera, and obtaining external parameters of the vehicle-mounted camera when the obstacle images of the multiple angles are shot;
according to external parameters of the vehicle-mounted camera, carrying out ground plane projection processing on the obstacle image to obtain a projected image;
carrying out pixel fusion on the projection image to obtain a background image;
performing difference processing between each of the projection images and the background image to obtain obstacle projection images retaining the obstacles;
carrying out contour extraction processing on the obstacle projection image to obtain an obstacle contour image;
determining grounding points of the obstacles from the obstacle outline image;
and calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle.
2. The method according to claim 1, wherein the acquiring of the obstacle images of the plurality of angles captured by the vehicle-mounted camera during the driving of the vehicle and the external parameters of the vehicle-mounted camera when capturing the obstacle images of the plurality of angles comprises:
acquiring reference external parameters of the vehicle-mounted camera; the reference external parameters are obtained by calibrating the vehicle-mounted camera in a first world coordinate system established with the point where a vertical line through the center of the vehicle's rear axle intersects the ground as its origin;
in the running process of the vehicle, acquiring obstacle images of multiple angles shot by the vehicle-mounted camera, and acquiring world coordinates of the vehicle-mounted camera in a second world coordinate system when the obstacle images of the multiple angles are shot; the origin of the second world coordinate system is the point where a vertical line through the center of the rear axle of the vehicle intersects the ground when the first obstacle image is shot;
and calculating the external parameters of the vehicle-mounted camera when the obstacle images of the plurality of angles are shot according to the reference external parameters and the world coordinates of the vehicle-mounted camera in the second world coordinate system.
3. The method according to claim 1, wherein the performing ground plane projection processing on the obstacle image according to external parameters of the vehicle-mounted camera to obtain a projected image comprises:
acquiring internal parameters of the camera;
and carrying out ground plane projection processing on the obstacle image according to the internal parameters and the external parameters of the vehicle-mounted camera to obtain a projected image.
4. The method according to claim 1, wherein the contour extracting the obstacle projection image to obtain the obstacle contour image comprises:
performing dilation-erosion processing and density clustering processing on the obstacle projection image to obtain a change area image;
and carrying out Hough transform fitting processing on the pixel points of the image in the change area to obtain the obstacle contour image.
5. The method of claim 1, wherein said determining a grounding point of said obstacle from said obstacle profile image comprises:
and taking the position where the obstacles are overlapped in each obstacle outline image as the grounding point of the obstacle.
6. The method of claim 2, wherein said calculating a distance of the obstacle from the vehicle based on the grounding point of the obstacle comprises:
acquiring world coordinates of a grounding point of the vehicle in the second world coordinate system;
determining world coordinates of a grounding point of the obstacle in the second world coordinate system;
calculating the distance between the world coordinates, in the second world coordinate system, of the grounding point of the vehicle and those of the grounding point of the obstacle as the distance between the obstacle and the vehicle.
7. An obstacle ranging apparatus, comprising:
the vehicle-mounted camera comprises an image acquisition module, a display module and a control module, wherein the image acquisition module is used for acquiring obstacle images of multiple angles shot by a single vehicle-mounted camera in the driving process of a vehicle and acquiring external parameters of the vehicle-mounted camera when the obstacle images of the multiple angles are shot;
the image projection module is used for carrying out ground plane projection processing on the obstacle image according to external parameters of the vehicle-mounted camera to obtain a projected image;
the pixel fusion module is used for carrying out pixel fusion on the projection image to obtain a background image;
the image difference module is used for performing difference processing between each of the projection images and the background image to obtain obstacle projection images retaining the obstacles;
the contour extraction module is used for carrying out contour extraction processing on the obstacle projection image to obtain an obstacle contour image;
the grounding point determining module is used for determining the grounding point of the obstacle from the obstacle outline image;
and the distance calculation module is used for calculating the distance between the obstacle and the vehicle according to the grounding point of the obstacle.
8. The apparatus of claim 7, wherein the image acquisition module comprises:
the parameter acquisition submodule is used for acquiring reference external parameters of the vehicle-mounted camera; the reference external parameters are obtained by calibrating the vehicle-mounted camera in a first world coordinate system established with the point where a vertical line through the center of the vehicle's rear axle intersects the ground as its origin;
the image acquisition submodule is used for acquiring, during the running of the vehicle, obstacle images of multiple angles shot by the vehicle-mounted camera, and the world coordinates of the vehicle-mounted camera in a second world coordinate system when the obstacle images of the multiple angles are shot; the origin of the second world coordinate system is the point where a vertical line through the center of the rear axle of the vehicle intersects the ground when the first obstacle image is shot;
and the parameter calculation submodule is used for calculating the external parameters of the vehicle-mounted camera when the obstacle images at the plurality of angles are shot according to the reference external parameters and the world coordinates of the vehicle-mounted camera in the second world coordinate system.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor, when executing a program stored on the memory, implementing the method of any of claims 1-6.
10. One or more computer-readable media having instructions stored thereon that, when executed by one or more processors, cause the processors to perform the method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111188713.1A CN113869268A (en) | 2021-10-12 | 2021-10-12 | Obstacle ranging method and device, electronic equipment and readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113869268A true CN113869268A (en) | 2021-12-31 |
Family
ID=78999251
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114018215A (en) * | 2022-01-04 | 2022-02-08 | 智道网联科技(北京)有限公司 | Monocular distance measuring method, device, equipment and storage medium based on semantic segmentation |
CN115817463A (en) * | 2023-02-23 | 2023-03-21 | 禾多科技(北京)有限公司 | Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium |
CN116543032A (en) * | 2023-07-06 | 2023-08-04 | 中国第一汽车股份有限公司 | Impact object ranging method, device, ranging equipment and storage medium |
CN116883478A (en) * | 2023-07-28 | 2023-10-13 | 广州瀚臣电子科技有限公司 | Obstacle distance confirmation system and method based on automobile camera |