CN113537047A - Obstacle detection method, obstacle detection device, vehicle and storage medium - Google Patents


Info

Publication number
CN113537047A
Authority
CN
China
Prior art keywords
camera
environment image
interest
obstacle
characteristic point
Prior art date
Legal status
Pending
Application number
CN202110795686.8A
Other languages
Chinese (zh)
Inventor
赵德力
姚绍威
谷靖
陈君毅
Current Assignee
Guangdong Huitian Aerospace Technology Co Ltd
Original Assignee
Guangdong Huitian Aerospace Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Huitian Aerospace Technology Co Ltd filed Critical Guangdong Huitian Aerospace Technology Co Ltd
Priority to CN202110795686.8A
Publication of CN113537047A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an obstacle detection method and device, a vehicle, and a storage medium, belonging to the technical field of vehicle-mounted terminals. The method is applied to a control terminal in a vehicle, the vehicle further comprising a binocular camera that includes a first camera and a second camera, and comprises the following steps: acquiring a first environment image through the first camera and a second environment image through the second camera; performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of a target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image; extracting feature points of the two regions of interest and matching them to obtain matched feature point pairs; and determining the three-dimensional space coordinates of the target obstacle in a camera coordinate system from the feature point pairs. By identifying the regions of interest in each environment image and obtaining the feature point pairs from those regions, the three-dimensional space coordinates of the target obstacle in the camera coordinate system can be determined, which reduces the amount of data to be calculated in the process of determining the target obstacle and improves the efficiency of detecting it.

Description

Obstacle detection method, obstacle detection device, vehicle and storage medium
Technical Field
The present disclosure relates to the field of vehicle-mounted terminals, and in particular to an obstacle detection method and apparatus, a vehicle, and a storage medium.
Background
With the rapid development of science and technology, vehicles (such as automobiles) are used more and more in daily life. In the area of automatic driving, perception of the surrounding environment is particularly important for a vehicle capable of automatic driving: during driving, the vehicle-mounted terminal must control cameras to capture the surroundings of the vehicle in real time and analyze them to ensure driving safety. Currently, in binocular vision sensing technology, a vehicle uses a binocular camera to collect images of its surrounding environment, generates a disparity map by a disparity calculation method (such as the Semi-Global Matching (SGM) algorithm), and determines the position of an obstacle in the image from the disparity map, thereby achieving obstacle detection.
In this technical scheme, the disparity calculation must process every pixel of the whole image and perform local or global disparity optimization, which consumes a large amount of computing resources; the calculation process is therefore complicated and obstacle detection is inefficient.
Disclosure of Invention
The embodiment of the application provides an obstacle detection method, an obstacle detection device, a vehicle and a storage medium, which can reduce the calculation amount in the process of detecting the position of an obstacle and improve the efficiency of obstacle detection.
In one aspect, an embodiment of the present application provides an obstacle detection method, which is applied to a control terminal in a vehicle, where the vehicle further includes a binocular camera, the binocular camera includes a first camera and a second camera, and the method includes:
acquiring a first environment image through the first camera;
acquiring a second environment image through the second camera;
respectively performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of a target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image;
respectively extracting a first feature point of the first region of interest and a second feature point of the second region of interest;
matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs;
and determining the three-dimensional space coordinates of the target obstacle in a camera coordinate system according to each feature point pair, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera.
Optionally, the method further includes:
acquiring respective category information of the first region of interest and the second region of interest, wherein the category information is used for indicating the category of an obstacle corresponding to the region of interest;
and when the similarity between the respective category information of the first region of interest and the second region of interest is greater than a first threshold value, performing the steps of extracting a first feature point of the first region of interest and a second feature point of the second region of interest respectively, and matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs.
Optionally, the method further includes:
if a first region of interest corresponding to the target obstacle is detected only in the first environment image and no second region of interest corresponding to the target obstacle is detected in the second environment image, discarding the first region of interest and determining that the target obstacle is not detected; or
if a second region of interest corresponding to the target obstacle is detected only in the second environment image and no first region of interest corresponding to the target obstacle is detected in the first environment image, discarding the second region of interest and determining that the target obstacle is not detected.
Optionally, the matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs includes:
matching the image coordinates of each first feature point and each second feature point to obtain matching point pairs;
and performing epipolar constraint processing on each matching point pair, and taking each matching point pair that satisfies the epipolar constraint as a feature point pair.
Optionally, the matching the image coordinates of each first feature point and each second feature point to obtain the matching point pairs includes:
respectively calculating a first descriptor of each first feature point and a second descriptor of each second feature point;
and matching the first descriptor of each first feature point against the second descriptor of each second feature point to obtain each matching point pair.
Optionally, the determining, according to each of the feature point pairs, three-dimensional space coordinates of the target obstacle in a camera coordinate system includes:
calculating three-dimensional space coordinates of a target feature point pair in the camera coordinate system according to the image coordinates of the first feature point and the second feature point contained in the target feature point pair and the camera parameters of the first camera and the second camera, wherein the target feature point pair is any one of the matched feature point pairs;
and determining the three-dimensional space coordinates of the target obstacle in the camera coordinate system according to the three-dimensional space coordinates of each feature point pair in the camera coordinate system.
Optionally, the method further includes:
and correcting the first environment image and the second environment image according to the internal parameters and the external parameters of the binocular camera, so that the vertical coordinate of any point in space is the same in the first environment image and the second environment image.
In another aspect, an embodiment of the present application provides an obstacle detection device, which is applied to a control terminal in a vehicle, where the vehicle further includes a binocular camera, the binocular camera includes a first camera and a second camera, and the device includes:
the first acquisition module is used for acquiring a first environment image through the first camera;
the second acquisition module is used for acquiring a second environment image through the second camera;
the area acquisition module is used for respectively performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of a target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image;
the feature point extraction module is used for respectively extracting a first feature point of the first region of interest and a second feature point of the second region of interest;
the feature point matching module is used for matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs;
and the obstacle determining module is used for determining the three-dimensional space coordinates of the target obstacle in a camera coordinate system according to the feature point pairs, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera.
In another aspect, an embodiment of the present application provides a vehicle, which includes a control terminal, where the control terminal includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to implement the obstacle detection method according to the above aspect and any optional implementation manner thereof.
In another aspect, the present application provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the obstacle detection method according to the above aspect and its alternative implementations.
The technical scheme provided by the embodiment of the application can at least comprise the following beneficial effects:
acquiring a first environment image through a first camera, and acquiring a second environment image through a second camera; performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of the target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image; respectively extracting a first feature point of the first region of interest and a second feature point of the second region of interest, and matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs; and determining the three-dimensional space coordinates of the target obstacle in a camera coordinate system according to the feature point pairs, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera. By identifying the regions of interest in each environment image and obtaining the feature point pairs from those regions, the three-dimensional space coordinates of the target obstacle in the camera coordinate system are determined without calculating any pixels outside the regions of interest in the first environment image and the second environment image, which reduces the amount of data to be calculated in the process of determining the target obstacle and improves the efficiency of detecting it.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic diagram illustrating an image capturing principle of a binocular camera according to an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method of obstacle detection provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method of obstacle detection provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of an environment image after being divided according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a spatial coordinate relationship according to an exemplary embodiment of the present application;
fig. 6 is a block diagram of an obstacle detection device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Reference herein to "a plurality" means two or more. "And/or" describes the association relationship of associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It should be noted that the terms "first", "second", "third" and "fourth", etc. in the description and claims of the present application are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and "having," and any variations thereof, of the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The scheme provided by the application can be used in an automatic driving scene of a vehicle in daily life, and for convenience of understanding, the application architecture related to the embodiment of the application is briefly introduced below.
In daily life, vehicles have come into use in many fields, for example automobiles, airplanes, other aircraft, and trains. With the development of science and technology, vehicles are gradually moving towards automatic driving. In the field of automatic driving, a vehicle may encounter various obstacles during driving, such as other vehicles, pedestrians, telegraph poles, and tall buildings, so the vehicle's perception of the surrounding environment is particularly important. In terms of environment perception, images can currently be acquired through hardware devices such as cameras, and data operations performed on them to achieve obstacle detection. For example, common image capturing devices include binocular cameras, monocular cameras, and the like.
Taking an automobile as an example, please refer to fig. 1, which shows a schematic diagram of the image capturing principle of a binocular camera according to an exemplary embodiment of the present application. As shown in fig. 1, the scene includes a first camera 110, a second camera 120, an obstacle 130, and a baseline 140.
The optical center of the first camera 110 and the optical center of the second camera 120 lie on the baseline 140. The vehicle may be equipped with the first camera 110 and the second camera 120; when data about the surrounding environment needs to be acquired during automatic driving, the first camera 110 and the second camera 120 capture corresponding environment images, and the environment images of the two cameras are analyzed to determine the position coordinates of the obstacles around the vehicle. Optionally, as a new type of automobile, the flying car currently runs mostly on automatic driving, and during its travel it encounters various obstacles, such as other aircraft, telegraph poles, and tall buildings.
In these vehicles, binocular vision technology mainly generates a disparity map by a conventional disparity calculation method and produces three-dimensional coordinates for each point for obstacle perception. Regarding the calculation of the disparity map, the two cameras of a binocular camera are generally mounted horizontally, so the positional deviation between the pixels at which the same scene point is imaged by the two cameras appears in the horizontal direction. For example, after the point P on the obstacle in the scene of fig. 1 is imaged in the first camera at image coordinates (x, y), and assuming the positional deviation between the two cameras is d, its image coordinates after imaging in the second camera are (x - d, y); d is then the value of the disparity map at the coordinate (x, y) of the environment image acquired by the first camera.
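In standard stereo notation (this relation is not written out at this point in the original text, but it is consistent with the triangulation derived from fig. 5 below), the disparity at a left-image pixel and the resulting depth for a rectified pair are

```latex
d(x, y) = x_L - x_R, \qquad Z = \frac{m \cdot f}{d(x, y)}
```

where x_L and x_R are the abscissas of the same space point in the two images, m is the baseline between the optical centers, and f is the focal length.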
Although real-time performance can be achieved by accelerating the disparity calculation with Field-Programmable Gate Array (FPGA) hardware, the related hardware programming and configuration require a large amount of work; moreover, all functions then depend on the hardware, and operations such as conditional branching cannot be realized, so the computing process remains complex and position determination remains inefficient.
In order to reduce the amount of calculation in the process of determining the position of an obstacle and to improve the efficiency of that determination, in the present application the regions of interest of each image acquired by the binocular camera are segmented in advance, and the obstacle is determined from those regions of interest, which avoids unnecessary data operations and improves the efficiency of determining the position of the obstacle.
Referring to fig. 2, a flowchart of a method for detecting an obstacle according to an exemplary embodiment of the present application is shown. The obstacle detection method may be applied to a control terminal in a vehicle in the scene architecture shown in fig. 1, where the vehicle further includes a binocular camera, the binocular camera includes a first camera and a second camera, and the method may be executed by the control terminal. As shown in fig. 2, the obstacle detection method may include several steps as follows.
Step 201, a first environment image is acquired through a first camera.
Step 202, a second environment image is acquired through a second camera.
Optionally, the vehicle may be an automobile, and the control terminal in the vehicle may be a vehicle-mounted terminal. The vehicle controls the binocular camera to acquire images through the vehicle-mounted terminal; the image acquired by the first camera of the binocular camera is the first environment image, and the image acquired by the second camera is the second environment image.
Step 203, respectively performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of the target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image.
Optionally, an obstacle detection module may be provided in the vehicle-mounted terminal in advance. In this step, the obstacle detection module may perform obstacle detection on the acquired first environment image and second environment image to obtain the first region of interest of the target obstacle in the first environment image and the second region of interest of the target obstacle in the second environment image. The target obstacle may be any obstacle that the obstacle detection module is able to detect. For example, if the obstacle detection module is used to detect the positions of obstacles such as pedestrians, tall buildings, and aircraft in the environment image, the target obstacle may be a pedestrian, a tall building, an aircraft, and so on.
Alternatively, the region of interest may be a location region where the target obstacle is located in the environment image. That is, the position area where the target obstacle is located in the first environment image is the first region of interest of the first environment image, and the position area where the target obstacle is located in the second environment image is the second region of interest of the second environment image.
Step 204, respectively extracting a first feature point of the first region of interest and a second feature point of the second region of interest.
Optionally, the vehicle-mounted terminal may extract feature points from the first region of interest to obtain each first feature point in the first region of interest, may also extract feature points from the second region of interest to obtain each second feature point in the second region of interest, and matches each first feature point with each second feature point to obtain a plurality of matched feature point pairs.
Optionally, when extracting the first feature points of the first region of interest, they may be extracted based on color features, texture features, gradient features, shape features, and the like of the first region of interest; the second feature points of the second region of interest may be extracted in the same manner, thereby obtaining each first feature point in the first region of interest and each second feature point in the second region of interest.
Step 205, matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs.
Optionally, the vehicle-mounted terminal may further match the obtained first feature points and the second feature points one by one according to a preset matching manner. For example, the preset matching manner may be matching according to position coordinates of each feature point in each environment image, or matching according to description information of each feature point, where the description information is information describing attributes of the feature points in the feature point extraction method.
Step 206, determining the three-dimensional space coordinates of the target obstacle in a camera coordinate system according to the feature point pairs, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera.
Optionally, the vehicle-mounted terminal may calculate, according to each obtained feature point pair, a three-dimensional space coordinate of a feature point corresponding to each feature point pair in a coordinate system of any one of the first camera or the second camera, so as to determine a three-dimensional space coordinate of the target obstacle in the coordinate system of any one of the first camera or the second camera.
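Taken together, steps 201 to 206 can be summarized in a minimal sketch. All helper names below (detect_rois, extract_and_match, triangulate_pairs) are hypothetical, introduced only to show the data flow; they are not names used by the patent.

```python
def detect_obstacle(left_image, right_image):
    """Sketch of steps 201-206; all helper functions are hypothetical."""
    # Steps 201-202: the two environment images are captured by the
    # first and second cameras of the binocular camera.
    rois_left = detect_rois(left_image)    # step 203: regions of interest
    rois_right = detect_rois(right_image)  # of the target obstacle
    # Steps 204-205: extract feature points inside the ROIs only and
    # match them into feature point pairs.
    pairs = extract_and_match(rois_left, rois_right)
    # Step 206: triangulate each pair into the camera coordinate system.
    return triangulate_pairs(pairs)
```

The point of the design is that every stage after step 203 touches only ROI pixels, which is where the claimed reduction in computation comes from.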
In summary, a first environment image is acquired through the first camera, and a second environment image is acquired through the second camera; obstacle detection is performed on the first environment image and the second environment image to obtain a first region of interest of the target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image; a first feature point of the first region of interest and a second feature point of the second region of interest are respectively extracted, and the first feature point is matched with the second feature point to obtain a plurality of matched feature point pairs; and the three-dimensional space coordinates of the target obstacle in a camera coordinate system are determined according to the feature point pairs, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera. By identifying the regions of interest in each environment image and obtaining the feature point pairs from those regions, the three-dimensional space coordinates of the target obstacle in the camera coordinate system are determined without calculating any pixels outside the regions of interest in the first environment image and the second environment image, which reduces the amount of data to be calculated in the process of determining the target obstacle and improves the efficiency of detecting it.
In a possible implementation, the obstacle detection module may also be able to identify the category of a region of interest. When obstacle detection is performed on the environment images to acquire the regions of interest, the obstacle detection module may acquire them together with category information, which avoids redundant operations on regions of different categories and improves the efficiency of determining the position of the target obstacle.
Referring to fig. 3, a flowchart of a method for detecting an obstacle according to an exemplary embodiment of the present application is shown. The obstacle detection method may be applied to a control terminal in a vehicle in the scene architecture shown in fig. 1, where the vehicle further includes a binocular camera, the binocular camera includes a first camera and a second camera, and the method may be executed by the control terminal. As shown in fig. 3, the obstacle detection method may include several steps as follows.
Step 301, a first environment image is collected through a first camera.
Step 302, a second environment image is acquired through a second camera.
In this embodiment, the vehicle may be an automobile, and the control terminal in the vehicle may be a vehicle-mounted terminal. During driving, the vehicle-mounted terminal may acquire images of the surrounding environment in real time, controlling the first camera to acquire a first environment image and the second camera to acquire a second environment image. The two cameras of the binocular camera formed by the first camera and the second camera capture their environment images simultaneously.
Step 303, correcting the first environment image and the second environment image according to the internal parameters and the external parameters of the binocular camera, so that the vertical coordinate of any point in space is the same in the first environment image and the second environment image.
Optionally, the vehicle-mounted terminal may correct the acquired first environment image and second environment image according to the current internal and external parameters of the binocular camera, so that, for any point in the space outside the vehicle, the vertical coordinate of its image in the image coordinate system of the first environment image captured by the first camera equals the vertical coordinate of its image in the image coordinate system of the second environment image captured by the second camera. That is, the corresponding image points of a space point have the same ordinate in the two images but different abscissas.
Optionally, the current internal parameters of the binocular camera may include an internal parameter matrix, a distortion coefficient matrix, and the like, and the external parameters may include a translation vector and a rotation matrix of the second camera with respect to the first camera.
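A minimal rectification sketch using OpenCV follows. The patent does not name a library, and the calibration values below are placeholders assumed only to make the sketch self-contained; in practice K1/D1, K2/D2, R, and T would come from an offline stereo calibration.

```python
import cv2
import numpy as np

# Assumed calibration data (would come from cv2.stereoCalibrate):
image_size = (1280, 720)
K1 = K2 = np.array([[1000.0, 0.0, 640.0],
                    [0.0, 1000.0, 360.0],
                    [0.0, 0.0, 1.0]])        # intrinsic parameter matrices
D1 = D2 = np.zeros(5)                        # distortion coefficients
R = np.eye(3)                                # rotation of camera 2 w.r.t. camera 1
T = np.array([-0.2, 0.0, 0.0])               # assumed 20 cm horizontal baseline

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2,
                                                  image_size, R, T)
map1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

left_img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed file paths
right_img = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# After remapping, a space point projects to the same row in both images.
left_rect = cv2.remap(left_img, map1[0], map1[1], cv2.INTER_LINEAR)
right_rect = cv2.remap(right_img, map2[0], map2[1], cv2.INTER_LINEAR)
```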
Step 304, performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of the target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image.
Optionally, the vehicle-mounted terminal performs obstacle detection on the first environment image and the second environment image respectively through the obstacle detection module, segments the target obstacle in the first environment image from the first environment image, and segments the target obstacle in the second environment image from the second environment image, so as to obtain a first region of interest of the target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image.
In a possible implementation, the vehicle-mounted terminal may also obtain the respective category information of the first region of interest and the second region of interest, the category information indicating the category of the obstacle corresponding to the region of interest; when the similarity between the category information of the first region of interest and that of the second region of interest is greater than a first threshold, a first feature point of the first region of interest and a second feature point of the second region of interest are respectively extracted, and the first feature point is matched with the second feature point to obtain a plurality of matched feature point pairs.
For example, after the vehicle-mounted terminal inputs the first environment image into the obstacle detection module, the module may also obtain the category of each obstacle in the first environment image; likewise, after the second environment image is input, the module may obtain the category of each obstacle in the second environment image. When the obstacle detection module detects that the category of an obstacle in the first environment image belongs to the categories of target obstacles, it may segment each such obstacle to obtain a plurality of first regions of interest; correspondingly, when it detects that the category of an obstacle in the second environment image belongs to the categories of target obstacles, it may segment each such obstacle to obtain a plurality of second regions of interest.
Please refer to fig. 4, which illustrates an environment image after segmentation according to an exemplary embodiment of the present application. As shown in fig. 4, the environment image 400 includes a region of interest 401 and a first corner point 402. The vehicle-mounted terminal can recognize, through the obstacle detection module, that the obstacle in the environment image 400 is a helicopter, and segment the obstacle to obtain the region of interest 401. If fig. 4 included a plurality of obstacles, a plurality of regions of interest would be obtained by segmentation, which is not described again here.
Optionally, each region of interest may be represented by a corner point coordinate together with the width and height of the region. For example, in fig. 4 the coordinates of the region of interest 401 are (a, b, w, h), where a and b are the abscissa and ordinate of the first corner point 402 of the region of interest 401, w represents the width of the region of interest 401 in the environment image 400, and h represents its height.
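In code, such an (a, b, w, h) region can be cropped directly from the image array (a sketch; `roi_coords` and `image` are assumed names, with `image` a NumPy array in row-major order):

```python
a, b, w, h = roi_coords              # (a, b): first corner; w, h: width, height
roi_patch = image[b:b + h, a:a + w]  # NumPy images index rows (y) first
```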
Optionally, the obstacle detection module mentioned above may use template matching based on a conventional algorithm, or an object detection or image segmentation method based on deep learning.
Optionally, the vehicle-mounted terminal may denote the regions of interest in the first environment image as I_leftROI1, I_leftROI2, I_leftROI3, …, I_leftROIn, I_leftROIn+1, and so on, where each I_leftROIi represents one region of interest; similarly, the regions of interest in the second environment image may be denoted I_rightROI1, I_rightROI2, I_rightROI3, …, I_rightROIn, I_rightROIn+1, and so on.
In a possible implementation, if the vehicle-mounted terminal detects the first region of interest corresponding to the target obstacle only in the first environment image and does not detect a second region of interest corresponding to the target obstacle in the second environment image, it discards the first region of interest and determines that the target obstacle has not been detected. That is, if a region of interest corresponding to a first target obstacle is detected in the first environment image but no corresponding region of interest is detected in the second environment image, the first target obstacle was captured by the first camera but not by the second camera; the vehicle-mounted terminal may treat the first target obstacle as a recognition error, consider that it does not exist within the binocular camera's field of view at the current time, and discard the first region of interest corresponding to it.
Correspondingly, if the vehicle-mounted terminal detects the second region of interest corresponding to the target obstacle only in the second environment image and does not detect a first region of interest corresponding to the target obstacle in the first environment image, it discards the second region of interest and determines that the target obstacle has not been detected. That is, if a region of interest corresponding to a first target obstacle is detected in the second environment image but no corresponding region of interest is detected in the first environment image, the first target obstacle was captured by the second camera but not by the first camera; the vehicle-mounted terminal may likewise treat the first target obstacle as a recognition error, consider that it does not exist within the binocular camera's field of view at the current time, and discard the second region of interest corresponding to it.
Optionally, the vehicle-mounted terminal combines the first region of interest of a given target obstacle in the first environment image with its second region of interest in the second environment image into an interest set (I_leftROI, I_rightROI) describing the two regions, and performs the following steps for each pair of interest sets (I_leftROI, I_rightROI).
Step 305, respectively extracting a first feature point of the first region of interest and a second feature point of the second region of interest.
Optionally, the vehicle-mounted terminal respectively extracts, through the feature point detection model, each first feature point of the target obstacle in the first region of interest and each second feature point of the target obstacle in the second region of interest; matches the image coordinates of each first feature point and each second feature point to obtain the matching point pairs; and performs epipolar constraint processing on each matching point pair, taking each matching point pair that satisfies the epipolar constraint as a feature point pair. Each feature point pair consists of the image coordinates of the same space point in the first environment image and in the second environment image.
The vehicle-mounted terminal detects feature points in the first region of interest of the first environment image according to the feature point detection model to obtain each first feature point, detects feature points in the second region of interest of the second environment image to obtain each second feature point, and matches the image coordinates of each first feature point and each second feature point to obtain the matching point pairs.
Step 306, matching the first feature points with the second feature points to obtain a plurality of matched feature point pairs.
In a possible implementation, the feature point detection model may obtain feature points based on an optical flow tracking method. For the first region of interest I_leftROI, a FAST feature point detection method may be used to obtain a plurality of feature points, giving a first feature point set P_left; the corresponding feature points in the second region of interest I_rightROI are obtained in the same way, giving a second feature point set P_right. The point coordinates of the first feature point set P_left and the second feature point set P_right correspond to each other one by one and are matched accordingly; that is, the matching point pairs may be composed from P_left and P_right. The feature point detection model then screens the matching point pairs according to the epipolar constraint, and each matching point pair satisfying the epipolar constraint is taken as a feature point pair between the first region of interest of the first environment image and the second region of interest of the second environment image.
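A sketch of this optical-flow variant, assuming OpenCV (the patent prescribes no library): FAST corners are detected in the left rectified image and tracked into the right one with pyramidal Lucas-Kanade, then screened with the rectified epipolar constraint (equal rows). For brevity the sketch runs on the full rectified images `left_rect` / `right_rect` from the earlier rectification sketch; in the patent's scheme, detection would be restricted to the matched ROI crops.

```python
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=20)
keypoints = fast.detect(left_rect, None)               # P_left candidates
p_left = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

# Track each FAST corner into the right image (pyramidal Lucas-Kanade).
p_right, status, _err = cv2.calcOpticalFlowPyrLK(left_rect, right_rect,
                                                 p_left, None)
ok = status.ravel() == 1                               # successfully tracked
p_left, p_right = p_left.reshape(-1, 2)[ok], p_right.reshape(-1, 2)[ok]

# Epipolar screening on rectified images: matched rows must (nearly) agree.
on_epiline = np.abs(p_left[:, 1] - p_right[:, 1]) < 1.0
point_pairs = list(zip(p_left[on_epiline], p_right[on_epiline]))
```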
In a possible implementation, the feature point detection model may instead use any one of the ORB (Oriented FAST and Rotated BRIEF) algorithm, the SIFT algorithm, the SURF algorithm, or the like as the feature point detection method. A feature point detection model adopting one of these methods calculates, during detection, a first descriptor for each first feature point and a second descriptor for each second feature point, and matches the first descriptors against the second descriptors to obtain the matching point pairs.
Optionally, when matching the first descriptors of the first feature points with the second descriptors of the second feature points, the feature points may be matched one by one using KD-tree-based fast matching or brute-force matching; the matching point pairs are then screened according to the epipolar constraint, and each matching point pair satisfying the epipolar constraint is taken as a feature point pair between the first region of interest of the first environment image and the second region of interest of the second environment image.
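A corresponding descriptor-based sketch, again assuming OpenCV rather than anything the patent mandates (brute-force Hamming matching with cross-check stands in for the KD-tree or brute-force matching mentioned above):

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
kp_l, des_l = orb.detectAndCompute(left_rect, None)    # first feature points
kp_r, des_r = orb.detectAndCompute(right_rect, None)   # second feature points

# Brute-force Hamming matching; crossCheck keeps mutual best matches only.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des_l, des_r)

point_pairs = []
for m in matches:
    xl, yl = kp_l[m.queryIdx].pt
    xr, yr = kp_r[m.trainIdx].pt
    if abs(yl - yr) < 1.0:             # epipolar constraint, rectified images
        point_pairs.append(((xl, yl), (xr, yr)))
```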
And 307, calculating three-dimensional space coordinates of the target feature point pair in a camera coordinate system according to the image coordinates of the first feature point and the second feature point contained in the target feature point pair and the camera parameters of the first camera and the second camera, wherein the target feature point pair is any one of a plurality of matched feature point pairs.
The camera coordinate system is the coordinate system of any one of the first camera and the second camera.
Optionally, the vehicle-mounted terminal may obtain image coordinates of the first feature point and the second feature point included in each feature point pair according to the obtained feature point pairs, and calculate three-dimensional space coordinates of each feature point pair in the camera coordinate system according to the image coordinates of the first feature point and the second feature point included in each feature point pair and the camera parameters of the first camera and the second camera.
For example, consider a target feature point pair (PL1, PR1), where PL1 represents the image coordinates of the first feature point formed by a point P in space on the first environment image, and PR1 represents the image coordinates of the second feature point formed by the same point P on the second environment image. The vehicle-mounted terminal calculates the three-dimensional space coordinates of the point P corresponding to the target feature point pair in the camera coordinate system by triangulation. Optionally, the camera parameters of the first camera and the second camera may include the optical center positions of the two cameras, the distance between their optical centers, and the focal length of the binocular camera.
For example, please refer to fig. 5, which illustrates a schematic diagram of a spatial coordinate relationship according to an exemplary embodiment of the present application. As shown in fig. 5, it includes a space point P 501, a first image coordinate system 502 of the first camera, and a second image coordinate system 503 of the second camera; the projection of the point P in the first environment image is the point PL 504 (the first feature point), and the projection of the point P in the second environment image is the point PR 505 (the second feature point). OL is the optical center position of the first camera and OR is the optical center position of the second camera. In fig. 5, the distance between the optical center of the first camera and the optical center of the second camera is m, and the distance from the space point P to the first connecting line is Z, where the first connecting line is the line connecting the optical centers of the two cameras.
As shown in fig. 5, if the image coordinates of the point PL in the first environment image captured by the first camera are (XL, Y) and the image coordinates of the point PR in the second environment image captured by the second camera are (XR, Y), then the distance between PL and PR is PLPR = m - (XL - S/2) - (S/2 - XR) = m - (XL - XR), where S is the size of the sensor. From the similar triangles P-PL-PR and P-OL-OR, the vehicle-mounted terminal obtains PLPR/(Z - f) = m/Z, and hence Z = m·f/(XL - XR), where f is the focal length of the binocular camera. By calculating Z, the depth information of the point P relative to the vehicle is obtained.
Optionally, in this application the camera coordinate system includes a first coordinate axis, a second coordinate axis, and a third coordinate axis, the third coordinate axis being perpendicular to the plane formed by the first and second coordinate axes. As can be seen from fig. 5, the directions of the two axes of the image coordinate system corresponding to the first environment image are the same as those of the image coordinate system corresponding to the second environment image; that is, the first coordinate axis, the second coordinate axis, and the corresponding axes of the two image coordinate systems all share the same directions. If the coordinates of the optical center in the image coordinate system of the first environment image are (X0, Y0), then the X value of the space point P on the first coordinate axis of the camera coordinate system and its Y value on the second coordinate axis relate to Z, the focal length, and the optical center coordinates as follows: (XL - X0)/f = X/Z and (YL - Y0)/f = Y/Z, i.e., X = (XL - X0)·Z/f and Y = (YL - Y0)·Z/f. In this way, the three-dimensional space coordinates of each feature point in the camera coordinate system can be obtained. In one manner, each pair of interest sets (I_leftROI, I_rightROI) corresponds to a set of feature point pairs, and the three-dimensional space coordinates of that set of feature point pairs in the camera coordinate system describe the position of the target obstacle corresponding to the region of interest; that is, the target obstacle may be represented by the space coordinates of its feature points.
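The relations above reduce to a few lines of code (a sketch; the variable names are assumptions, with m the baseline, f the focal length in pixels, and (x0, y0) the principal point):

```python
def pair_to_camera_coords(xl, yl, xr, m, f, x0, y0):
    """Map one rectified feature point pair to camera-frame coordinates.

    xl, yl: image coordinates of the first feature point (left image)
    xr:     abscissa of the matched second feature point (right image)
    m:      baseline between the optical centers; f: focal length in pixels
    x0, y0: image coordinates of the optical center (principal point)
    """
    d = xl - xr            # horizontal disparity XL - XR
    z = m * f / d          # depth, from PLPR / (Z - f) = m / Z
    x = (xl - x0) * z / f  # from (XL - X0) / f = X / Z
    y = (yl - y0) * z / f  # from (YL - Y0) / f = Y / Z
    return x, y, z
```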
Step 308, determining the three-dimensional space coordinates of the target obstacle in the camera coordinate system according to the three-dimensional space coordinates of each feature point pair in the camera coordinate system.
Optionally, the vehicle-mounted terminal determines the three-dimensional space coordinates of the target obstacle in the camera coordinate system by taking the median of the feature points' coordinates. That is, for any region of interest, all feature point pairs extracted from that region are considered by this application to come from the identified target obstacle; for the acquired set of space points, the space point P_mean obtained by taking the median along the first, second, and third coordinate axes respectively is taken as the space coordinate of the obstacle in the left-eye camera coordinate system.
For example, suppose the set of space points obtained for a first target obstacle by the above calculation is (P1, P2, …, PN), each point of which is a point on the first target obstacle; the space point P_mean obtained by taking the median along each of the first, second, and third coordinate axes is taken as the space coordinate of the first target obstacle in the camera coordinate system. Optionally, the vehicle-mounted terminal may instead calculate the barycenter of the obtained set of space points and use it as the space coordinate of the first target obstacle in the camera coordinate system; alternatively, it may use the point in the set that is closest to the binocular camera.
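A sketch of the per-axis median and the two alternatives just mentioned (NumPy; `point_set` is an assumed name for the triangulated points (P1, …, PN) of one region of interest):

```python
import numpy as np

points = np.asarray(point_set)       # shape (N, 3): P1 ... PN in camera frame
p_mean = np.median(points, axis=0)   # per-axis median, robust to mismatches

# Alternatives mentioned above:
centroid = points.mean(axis=0)                               # barycenter
nearest = points[np.argmin(np.linalg.norm(points, axis=1))]  # closest point
```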
In summary, a first environment image is acquired through the first camera, and a second environment image is acquired through the second camera; obstacle detection is performed on the first environment image and the second environment image to obtain a first region of interest of the target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image; a first feature point of the first region of interest and a second feature point of the second region of interest are respectively extracted, and the first feature point is matched with the second feature point to obtain a plurality of matched feature point pairs; and the three-dimensional space coordinates of the target obstacle in a camera coordinate system are determined according to the feature point pairs, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera. By identifying the regions of interest in each environment image and obtaining the feature point pairs from those regions, the three-dimensional space coordinates of the target obstacle in the camera coordinate system are determined without calculating any pixels outside the regions of interest in the first environment image and the second environment image, which reduces the amount of data to be calculated in the process of determining the target obstacle and improves the efficiency of detecting it.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 6, a block diagram of an obstacle detection apparatus according to an exemplary embodiment of the present application is shown, where the obstacle detection apparatus 600 may be applied to a control terminal in a vehicle, where the vehicle further includes a binocular camera, where the binocular camera includes a first camera and a second camera, and the obstacle detection apparatus includes:
the first acquisition module 601 is configured to acquire a first environment image through the first camera;
a second collecting module 602, configured to collect a second environment image through the second camera;
an area obtaining module 603, configured to perform obstacle detection on the first environment image and the second environment image respectively, so as to obtain a first region of interest of a target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image;
a feature point extracting module 604, configured to extract a first feature point of the first region of interest and a second feature point of the second region of interest respectively;
a feature point matching module 605, configured to match the first feature point with the second feature point to obtain a plurality of matched feature point pairs;
an obstacle determining module 606, configured to determine, according to each of the feature point pairs, a three-dimensional space coordinate of the target obstacle in a camera coordinate system, where the camera coordinate system is a coordinate system of any one of the first camera and the second camera.
In summary, a first environment image is acquired through the first camera, and a second environment image is acquired through the second camera; obstacle detection is performed on the first environment image and the second environment image to obtain a first region of interest of the target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image; a first feature point of the first region of interest and a second feature point of the second region of interest are respectively extracted, and the first feature point is matched with the second feature point to obtain a plurality of matched feature point pairs; and the three-dimensional space coordinates of the target obstacle in a camera coordinate system are determined according to the feature point pairs, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera. By identifying the regions of interest in each environment image and obtaining the feature point pairs from those regions, the three-dimensional space coordinates of the target obstacle in the camera coordinate system are determined without calculating any pixels outside the regions of interest in the first environment image and the second environment image, which reduces the amount of data to be calculated in the process of determining the target obstacle and improves the efficiency of detecting it.
Optionally, the apparatus further comprises:
the category acquisition module is used for acquiring category information of the first region of interest and the second region of interest respectively, wherein the category information is used for indicating a category of an obstacle corresponding to the region of interest;
a first execution module, configured to, when a similarity between respective category information of the first region of interest and the second region of interest is greater than a first threshold, execute the steps of extracting a first feature point of the first region of interest and a second feature point of the second region of interest, and matching the first feature point and the second feature point to obtain a plurality of matched feature point pairs.
Optionally, the apparatus further comprises: either the first determination module or the second determination module,
the first determining module is configured to discard the first region of interest and determine that the target obstacle is not detected if only the first region of interest corresponding to the target obstacle is detected in the first environment image and the second region of interest corresponding to the target obstacle is not detected in the second environment image;
the second determining module is configured to discard the second region of interest and determine that the target obstacle is not detected if only the second region of interest corresponding to the target obstacle is detected in the second environment image and the first region of interest corresponding to the target obstacle is not detected in the first environment image.
Optionally, the feature point matching module 605 includes: the device comprises a feature extraction unit, a feature matching unit and a feature point acquisition unit;
the feature extraction unit is configured to extract, by using a feature point detection model, each first feature point of the obstacle in the first region of interest and each second feature point of the obstacle in the second region of interest, respectively;
the feature matching unit is configured to match image coordinates of each first feature point and each second feature point to obtain each matching point pair;
and the feature point acquisition unit is configured to perform epipolar constraint processing on each matching point pair and take each matching point pair satisfying the epipolar constraint as a feature point pair.
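On rectified images (see the image correction module below), the epipolar constraint reduces to matched points lying on nearly the same image row, which yields a cheap filter. The sketch below assumes OpenCV keypoints and cv2.DMatch results from the matching step; the 1-pixel tolerance is an illustrative value.

```python
def epipolar_filter(matches, kp_first, kp_second, tol=1.0):
    """Keep only the matches whose keypoints satisfy the epipolar constraint;
    on rectified images this means near-equal vertical coordinates."""
    pairs = []
    for m in matches:  # m: cv2.DMatch from the descriptor matching step
        x1, y1 = kp_first[m.queryIdx].pt
        x2, y2 = kp_second[m.trainIdx].pt
        if abs(y1 - y2) <= tol:  # same epipolar line, i.e. same image row
            pairs.append(((x1, y1), (x2, y2)))
    return pairs
```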
Optionally, the feature matching unit is configured to,
compute a first descriptor of each first feature point and a second descriptor of each second feature point respectively;
and match the first descriptor of each first feature point with the second descriptor of each second feature point to obtain the matching point pairs.
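As a sketch of this descriptor step, ORB descriptors with brute-force Hamming matching on the two region-of-interest crops is one plausible realization; the description does not name a particular descriptor, so ORB is an assumption.

```python
import cv2

def match_descriptors(roi_img_first, roi_img_second):
    """Compute a descriptor per feature point in each region-of-interest crop
    and match the first descriptors against the second descriptors."""
    orb = cv2.ORB_create()  # illustrative; any keypoint descriptor would do
    kp1, des1 = orb.detectAndCompute(roi_img_first, None)
    kp2, des2 = orb.detectAndCompute(roi_img_second, None)
    if des1 is None or des2 is None:
        return [], kp1, kp2  # no feature points found in one of the regions
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return matches, kp1, kp2
```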
Optionally, the obstacle determining module 606 includes: a coordinate calculation unit and a coordinate determination unit;
the coordinate calculation unit is configured to calculate three-dimensional space coordinates of the target feature point pair in the camera coordinate system according to image coordinates of a first feature point and a second feature point included in the target feature point pair, and camera parameters of the first camera and the second camera, where the target feature point pair is any one of the plurality of matched feature point pairs;
and the coordinate determination unit is used for determining the three-dimensional space coordinates of the target obstacle in the camera coordinate system according to the three-dimensional space coordinates of each feature point pair in the camera coordinate system.
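The per-pair calculation can be sketched with cv2.triangulatePoints, assuming the 3x4 projection matrices P1 and P2 of the two cameras are available from calibration or rectification; summarizing the obstacle position as the median of the triangulated points is likewise an illustrative choice.

```python
import cv2
import numpy as np

def triangulate_pairs(pairs, P1, P2):
    """Triangulate each matched feature point pair into the coordinate system
    of the first camera, then summarize the obstacle position."""
    pts1 = np.float64([p[0] for p in pairs]).T  # 2xN image coords, first view
    pts2 = np.float64([p[1] for p in pairs]).T  # 2xN image coords, second view
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous, 4xN
    pts3d = (pts4d[:3] / pts4d[3]).T                   # Euclidean, Nx3
    # Per-pair 3D points plus one representative obstacle position
    return pts3d, np.median(pts3d, axis=0)
```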
Optionally, the apparatus further comprises:
and the image correction module is configured to correct the first environment image and the second environment image according to the intrinsic parameters and extrinsic parameters of the binocular camera, so that the vertical coordinate of any point in space is the same in the first environment image and in the second environment image.
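A minimal sketch of such a correction using OpenCV stereo rectification, assuming the intrinsic matrices K1/K2, distortion vectors d1/d2, and the inter-camera rotation R and translation T are known from calibration:

```python
import cv2

def rectify_pair(img_first, img_second, K1, d1, K2, d2, R, T):
    """Remap both images so that any space point lands on the same row
    (vertical coordinate) in the first and second environment images."""
    size = (img_first.shape[1], img_first.shape[0])  # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect_first = cv2.remap(img_first, m1x, m1y, cv2.INTER_LINEAR)
    rect_second = cv2.remap(img_second, m2x, m2y, cv2.INTER_LINEAR)
    return rect_first, rect_second, P1, P2  # P1, P2 also feed triangulation
```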
The embodiment of the application also discloses a vehicle, which comprises a control terminal, wherein the control terminal comprises a memory and a processor, the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to implement the obstacle detection method of the above method embodiments. Optionally, the vehicle may be an automobile and the control terminal an on-board terminal in the automobile; the vehicle may also be a vehicle with a flight function.
The embodiment of the application also discloses a computer readable storage medium which stores a computer program, wherein the computer program realizes the method in the embodiment of the method when being executed by a processor.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and in particular may be a processor in the computer device) to execute some or all of the steps of the above-described methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The obstacle detection method, apparatus, vehicle and storage medium disclosed in the embodiments of the present application have been described above by way of example. Specific examples are used herein to explain the principle and implementation of the present application, and the description of the above embodiments is only intended to help the reader understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, vary the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An obstacle detection method, applied to a control terminal in a vehicle, wherein the vehicle further comprises a binocular camera, the binocular camera comprises a first camera and a second camera, and the method comprises the following steps:
acquiring a first environment image through the first camera;
acquiring a second environment image through the second camera;
respectively performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of a target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image;
respectively extracting a first feature point of the first region of interest and a second feature point of the second region of interest;
matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs;
and determining three-dimensional space coordinates of the target obstacle in a camera coordinate system according to each feature point pair, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera.
2. The method of claim 1, further comprising:
acquiring respective category information of the first region of interest and the second region of interest, wherein the category information is used for indicating the category of an obstacle corresponding to the region of interest;
and when the similarity between the respective category information of the first region of interest and the second region of interest is greater than a first threshold value, performing the steps of extracting a first feature point of the first region of interest and a second feature point of the second region of interest respectively, and matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs.
3. The method of claim 1, further comprising:
if a first region of interest corresponding to the target obstacle is detected only in the first environment image and no second region of interest corresponding to the target obstacle is detected in the second environment image, discarding the first region of interest and determining that the target obstacle is not detected; or
if a second region of interest corresponding to the target obstacle is detected only in the second environment image and no first region of interest corresponding to the target obstacle is detected in the first environment image, discarding the second region of interest and determining that the target obstacle is not detected.
4. The method of claim 1, wherein matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs comprises:
matching the image coordinates of each first feature point and each second feature point to obtain matching point pairs;
and performing epipolar constraint processing on each matching point pair, and taking each matching point pair satisfying the epipolar constraint as a feature point pair.
5. The method according to claim 4, wherein matching the image coordinates of each first feature point and each second feature point to obtain the matching point pairs comprises:
respectively computing a first descriptor of each first feature point and a second descriptor of each second feature point;
and matching the first descriptor of each first feature point with the second descriptor of each second feature point to obtain the matching point pairs.
6. The method according to any one of claims 1 to 5, wherein said determining three-dimensional space coordinates of said target obstacle in a camera coordinate system from each of said feature point pairs comprises:
calculating three-dimensional space coordinates of a target feature point pair in the camera coordinate system according to the image coordinates of the first feature point and the second feature point contained in the target feature point pair and the camera parameters of the first camera and the second camera, wherein the target feature point pair is any one of the plurality of matched feature point pairs;
and determining the three-dimensional space coordinates of the target obstacle in the camera coordinate system according to the three-dimensional space coordinates of each feature point pair in the camera coordinate system.
7. The method of any of claims 1 to 5, further comprising:
and correcting the first environment image and the second environment image according to the intrinsic parameters and extrinsic parameters of the binocular camera, so that the vertical coordinate of any point in space is the same in the first environment image and in the second environment image.
8. An obstacle detection device, applied to a control terminal in a vehicle, wherein the vehicle further comprises a binocular camera, the binocular camera comprises a first camera and a second camera, and the device comprises:
the first acquisition module is used for acquiring a first environment image through the first camera;
the second acquisition module is used for acquiring a second environment image through the second camera;
the area acquisition module is used for respectively performing obstacle detection on the first environment image and the second environment image to obtain a first region of interest of a target obstacle in the first environment image and a second region of interest of the target obstacle in the second environment image;
the feature point extraction module is used for respectively extracting a first feature point of the first region of interest and a second feature point of the second region of interest;
the feature point matching module is used for matching the first feature point with the second feature point to obtain a plurality of matched feature point pairs;
and the obstacle determining module is used for determining the three-dimensional space coordinates of the target obstacle in a camera coordinate system according to the feature point pairs, wherein the camera coordinate system is the coordinate system of either the first camera or the second camera.
9. A vehicle, characterized in that the vehicle comprises a control terminal, the control terminal comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the obstacle detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the obstacle detection method according to any one of claims 1 to 7.
CN202110795686.8A 2021-07-14 2021-07-14 Obstacle detection method, obstacle detection device, vehicle and storage medium Pending CN113537047A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110795686.8A CN113537047A (en) 2021-07-14 2021-07-14 Obstacle detection method, obstacle detection device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN113537047A true CN113537047A (en) 2021-10-22

Family

ID=78127902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110795686.8A Pending CN113537047A (en) 2021-07-14 2021-07-14 Obstacle detection method, obstacle detection device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN113537047A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130093887A1 (en) * 2011-10-13 2013-04-18 Altek Autotronics Corp. Obstacle Detection System and Obstacle Detection Method Thereof
CN111179300A (en) * 2019-12-16 2020-05-19 新奇点企业管理集团有限公司 Method, apparatus, system, device and storage medium for obstacle detection
CN111160302A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Obstacle information identification method and device based on automatic driving environment
CN112651359A (en) * 2020-12-30 2021-04-13 深兰科技(上海)有限公司 Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
The Third Research Institute of the Ministry of Public Security, Southeast University Press *
LIU Yugang et al.: "Measurement method of obstacles in a reversing environment based on binocular stereo vision", Journal of Transportation Systems Engineering and Information Technology *
GUO Baolong: "Introduction to Digital Image Processing Systems Engineering", 30 July 2012 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114384568A (en) * 2021-12-29 2022-04-22 达闼机器人有限公司 Position measuring method and device based on mobile camera, processing equipment and medium
CN115063772A (en) * 2022-05-09 2022-09-16 厦门金龙联合汽车工业有限公司 Vehicle formation rear vehicle detection method, terminal device and storage medium
CN115063772B (en) * 2022-05-09 2024-04-16 厦门金龙联合汽车工业有限公司 Method for detecting vehicles after formation of vehicles, terminal equipment and storage medium
CN115049822A (en) * 2022-05-26 2022-09-13 中国科学院半导体研究所 Three-dimensional imaging method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination