CN113298044A - Obstacle detection method, system, device and storage medium based on positioning compensation - Google Patents


Info

Publication number
CN113298044A
CN113298044A
Authority
CN
China
Prior art keywords
obstacle
image
compensation
label
labels
Prior art date
Legal status
Granted
Application number
CN202110698851.8A
Other languages
Chinese (zh)
Other versions
CN113298044B (en)
Inventor
谭黎敏
孙作雷
饶兵兵
杨骋
Current Assignee
Shanghai Xijing Technology Co ltd
Original Assignee
Shanghai Westwell Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Westwell Information Technology Co Ltd
Priority to CN202110698851.8A
Publication of CN113298044A
Application granted
Publication of CN113298044B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems


Abstract

The invention provides an obstacle detection method, system, device and storage medium based on positioning compensation, wherein the method comprises the following steps: acquiring the obstacle category labels identified by at least one vehicle, together with the positioning information of those labels; clustering the obstacle category labels and the positioning information based on an electronic map to obtain the obstacle regions, in the electronic map, of the obstacles corresponding to the labels; establishing a confidence compensation region for the corresponding obstacle category label for every obstacle region whose radius exceeds a preset threshold; and, when a vehicle performs neural network recognition on at least a captured image, increasing the confidence of an obstacle category label if the positioning information corresponding to a pixel lies within that label's confidence compensation region, sorting the obstacle category labels, and outputting the top-ranked label as the pixel's obstacle category label. The method compensates the confidence of obstacles based on positioning and, in particular, improves the accuracy of real-time detection of large obstacles.

Description

Obstacle detection method, system, device and storage medium based on positioning compensation
Technical Field
The invention belongs to the field of machine vision, and particularly relates to a method, a system, equipment and a storage medium for detecting an obstacle based on positioning compensation.
Background
In recent years, as driver assistance technology has matured, assistance functions have increasingly been applied in mass-produced vehicles. Driver assistance technology is an indispensable stage in the development of the automobile from mechanization to intelligence; it safeguards the driver's behavior and improves driving comfort, safety and fuel economy. In both driver assistance and unmanned driving, environmental perception is a core component. Environmental perception means that the vehicle senses its surroundings through sensors such as cameras, ultrasonic radar, millimeter-wave radar and laser radar, providing an essential basis for the vehicle's control decisions. Accurate, real-time collision warning is of particular practical significance: it plays a decisive role in assisted-driving safety alerts and in the automatic control of autonomous driving. The more accurate the collision warning, the fewer the accidents and the smaller the loss of life and property.
At present, 3D information such as the size, position, category and orientation of objects detected from binocular images has important applications in robotics, autonomous driving, vehicle-road coordination and other fields. Under complex road conditions, however, such as unmanned container terminals, obstacle sizes vary greatly. Take a large obstacle such as a gantry crane: when an unmanned vehicle is far away, it can capture the complete outline of the crane, so a machine vision neural network identifies the obstacle category easily; but when the vehicle is close, the captured image shows only part of the crane, which the network may fail to recognize. This severely restricts the cooperation between unmanned vehicles and other unmanned hoisting equipment.
Accordingly, the present invention provides a method, system, device and storage medium for obstacle detection based on location compensation.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present invention and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide an obstacle detection method, system, device and storage medium based on positioning compensation that overcome the difficulties in the prior art: they compensate the confidence of obstacles based on positioning and, in particular, improve the accuracy of real-time detection of large obstacles.
The embodiment of the invention provides an obstacle detection method based on positioning compensation, which comprises the following steps:
s110, acquiring each obstacle type label acquired by at least one vehicle through identification and positioning information of the obstacle type labels;
s120, clustering the obstacle category labels and the positioning information based on the electronic map to obtain obstacle areas of the obstacles corresponding to the obstacle category labels in the electronic map;
s130, establishing a confidence compensation area of the corresponding obstacle type label for the obstacle area with the radius exceeding a preset threshold value; and
s140, when the vehicle carries out neural network identification at least based on the collected image, if the positioning information corresponding to the pixel is located in the confidence coefficient compensation area, increasing the confidence coefficient of the obstacle type label, and outputting the obstacle type label of the pixel after sorting the obstacle type labels.
Preferably, the method further comprises the following step:
S150, clustering the obstacle category labels of the pixels in the image, dividing the image into several regions according to the clustering result, and obtaining the obstacle category label of the image corresponding to each region.
Preferably, in step S110, image data collected by a binocular camera device of the vehicle and/or point cloud data collected by a laser radar are acquired, and the distance from the vehicle to each obstacle and the obstacle category labels are obtained from them; the positioning information of each obstacle is then obtained based on the vehicle's real-time positioning information and attitude detection.
Preferably, in step S120, the obstacle category labels collected by the multiple vehicles are clustered and deduplicated based on the positioning information, so as to obtain an obstacle area of the obstacle in the electronic map.
Preferably, in step S130, a center is established for each obstacle region in the electronic map, and at least one circular confidence compensation region is established around that center based on a preset radius range, wherein the preset threshold is greater than 3 meters and the preset radius is less than 100 meters.
Preferably, in step S140, when the positioning information corresponding to a pixel lies within a confidence compensation region, the pixel's confidence for the obstacle category label associated with that region is multiplied by a preset magnification; the obstacle category labels are then sorted, and the highest-ranked label is output as the pixel's obstacle category label, wherein the preset magnification ranges from 1.5 to 5.
Preferably, the preset magnification increases as the distance between the pixel's positioning information and the center of the obstacle region decreases.
Preferably, the step S110 includes the following steps:
S111, capturing color images with a binocular camera device;
S112, calculating a parallax matrix from the left and right images captured by the binocular camera device at the same moment, obtaining a distance value for each pixel, and generating point cloud information and a top view based on the left image and the pixels' positioning information in the electronic map;
S113, inputting the left image into a trained machine vision model to perform image segmentation, and obtaining the obstacle category label, obstacle code, label confidence and positioning information corresponding to each segmented image region in the left image, thereby obtaining the composite image information of the left image;
S114, marking the positions, distances and obstacle category labels of all obstacles in the top view; and
S115, obtaining the positioning information of the obstacle in the electronic map based on the real-time positioning information of the vehicle and the positional relation between the obstacle and the vehicle in the top view.
Preferably, in step S113, the composite image information of the left image at least includes RGB values of each pixel, an obstacle category label D, an obstacle category-based code H, a label confidence T, a distance value P, and positioning information W;
In step S114, ground information is fitted from the parallax matrix and the angle between the binocular camera device and the ground is obtained; a virtual camera is set according to that angle, and each point of the three-dimensional point cloud is projected into a top view, in which each point carries the obstacle category label of its three-dimensional point.
Preferably, the step S140 includes the following steps:
S141, capturing color images with a binocular camera device;
S142, calculating a parallax matrix from the left and right images captured by the binocular camera device at the same moment, obtaining a distance value for each pixel, and generating point cloud information and a top view based on the left image and the pixels' positioning information in the electronic map;
S143, inputting the left image into the trained machine vision model to perform image segmentation;
S144, when the point cloud corresponding to a pixel in a segmented image lies within the confidence compensation region, increasing the confidence of the obstacle category label, sorting the obstacle category labels, and outputting the pixel's obstacle category label;
S145, after clustering and sorting the obstacle category labels of all pixels in a segmented image, taking the most frequent obstacle category label as the obstacle category label of that segmented image.
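The per-segment vote of step S145 can be sketched as follows; counting labels and taking the most frequent one stands in for the "cluster and sort" wording (a minimal illustration, not the patent's exact procedure):

```python
from collections import Counter

def segment_label(pixel_labels):
    """Return the most frequent obstacle category label among the
    pixels of one segmented image region: count the labels, sort by
    frequency, and take the top label as the region's label."""
    counts = Counter(pixel_labels)
    label, _ = counts.most_common(1)[0]
    return label
```

For instance, a region whose pixels voted mostly "gantry crane" is labeled as a gantry crane even if a minority of pixels voted otherwise.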
Preferably, the method further comprises the following steps:
S146, obtaining the obstacle category label, obstacle code, label confidence and positioning information corresponding to the pixels of each segmented image region in the left image, thereby obtaining the composite image information of the left image;
S147, marking the positions, distances and obstacle category labels of all obstacles in the top view; and
S148, planning a path at least according to the top view.
Preferably, in step S145, the composite image information of the left image at least includes RGB values of each pixel, an obstacle class label D, an obstacle class-based code H, a label confidence T, a distance value P, and positioning information W.
The embodiment of the present invention further provides an obstacle detection system based on positioning compensation, which is used for implementing the above obstacle detection method based on positioning compensation and includes:
an acquisition module, used for acquiring the obstacle category labels identified by at least one vehicle, together with the positioning information of those labels;
a clustering module, used for clustering the obstacle category labels and the positioning information based on the electronic map to obtain the obstacle regions, in the electronic map, of the obstacles corresponding to the obstacle category labels;
a compensation module, used for establishing a confidence compensation region for the corresponding obstacle category label for each obstacle region whose radius exceeds a preset threshold; and
a recognition module, used for increasing the confidence of an obstacle category label when the vehicle performs neural network recognition on at least a captured image and the positioning information corresponding to a pixel lies within the confidence compensation region, then sorting the obstacle category labels and outputting the pixel's obstacle category label.
An embodiment of the present invention further provides an obstacle detection apparatus based on positioning compensation, including:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the above positioning compensation based obstacle detection method via execution of executable instructions.
Embodiments of the present invention also provide a computer-readable storage medium storing a program, which when executed, implements the steps of the above-mentioned obstacle detection method based on location compensation.
The obstacle detection method, system, device and storage medium based on positioning compensation of the present invention can compensate the confidence of obstacles based on positioning and, in particular, improve the accuracy of real-time detection of large obstacles.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings.
Fig. 1 is a flowchart of the obstacle detection method based on location compensation of the present invention.
Fig. 2 to 6 are schematic diagrams illustrating an implementation process of the obstacle detection method based on positioning compensation according to the present invention.
Fig. 7 is a schematic structural diagram of the obstacle detection system based on positioning compensation of the present invention.
Fig. 8 is a schematic structural diagram of the obstacle detection device based on location compensation of the present invention; and
fig. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Reference numerals
11 first vehicle
12 second vehicle
13 third vehicle
14 fourth vehicle
2 gantry crane
21 confidence compensation region
22 pixel point
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their repetitive description will be omitted.
Fig. 1 is a flowchart of the obstacle detection method based on location compensation of the present invention. As shown in fig. 1, the obstacle detection method based on positioning compensation of the present invention includes the following steps:
s110, acquiring each obstacle type label obtained by identifying at least one vehicle and positioning information of the obstacle type labels.
And S120, clustering the obstacle type labels and the positioning information based on the electronic map to obtain the obstacle areas of the obstacles corresponding to the obstacle type labels in the electronic map.
And S130, establishing a corresponding confidence compensation area of the obstacle type label for the obstacle area with the radius exceeding the preset threshold value. And
and S140, when the vehicle carries out neural network identification at least based on the collected image, if the positioning information corresponding to the pixel is positioned in the confidence coefficient compensation area, increasing the confidence coefficient of the obstacle type label, sorting the obstacle type labels, and outputting the obstacle type label of the pixel.
In a preferred embodiment, the method further comprises the following steps:
s150, clustering is carried out according to the obstacle category labels of the pixels in the image, the image is divided into a plurality of areas according to a clustering result, and the obstacle category labels of the image corresponding to the areas are obtained.
In a preferred embodiment, in step S110, all obstacles are obtained by using image data collected by a binocular camera of the vehicle and/or point cloud data collected by a laser radar, and the distance of the vehicle and the obstacle category label are used as the basis. And obtaining the positioning information of the obstacle through real-time positioning information and posture detection based on the vehicle.
In a preferred embodiment, in step S120, the obstacle category labels collected by the multiple vehicles are clustered and deduplicated based on the positioning information, so as to obtain the obstacle area of the obstacle in the electronic map.
In a preferred embodiment, in the step S130, a center is established for each obstacle region in the electronic map, and at least one circular confidence compensation region is established around that center based on a preset radius range, where the preset threshold is greater than 3 meters and the preset radius is less than 100 meters, but not limited thereto. Through filtering by the preset threshold, large obstacles (that is, obstacles that are recognized inaccurately when the viewing angle changes) can be recognized accurately through confidence compensation, while the recognition accuracy of small obstacles is unaffected by viewing-angle changes, so their confidence is not adjusted in this embodiment.
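As an illustrative sketch of step S130 (the concrete threshold and radius below are hypothetical values chosen within the ranges stated above, not values fixed by the patent):

```python
import math

# Hypothetical concrete values: the patent only requires the
# threshold to exceed 3 m and the radius to stay under 100 m.
RADIUS_THRESHOLD_M = 3.0
COMPENSATION_RADIUS_M = 100.0

def build_compensation_region(center_xy, obstacle_radius_m):
    """Create a circular confidence compensation region for a large
    obstacle. Obstacles at or below the threshold are filtered out
    and get no region, so their confidence is never adjusted."""
    if obstacle_radius_m <= RADIUS_THRESHOLD_M:
        return None
    return (center_xy, COMPENSATION_RADIUS_M)

def in_region(region, point_xy):
    """True if a pixel's map position lies inside the compensation circle."""
    if region is None:
        return False
    (cx, cy), r = region
    return math.hypot(point_xy[0] - cx, point_xy[1] - cy) <= r
```

A gantry crane with a 10 m footprint thus gets a region, while a 2 m obstacle does not.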
In a preferred embodiment, in the step S140, when the positioning information corresponding to a pixel lies within a confidence compensation region, the pixel's confidence for the obstacle category label associated with that region is multiplied by a preset magnification; the obstacle category labels are then sorted, and the highest-ranked label is output as the pixel's obstacle category label, where the preset magnification ranges from 1.5 to 5.
In a variation, a confidence compensation region consisting of at least multiple concentric rings may be established based on different radii; each ring corresponds to a different preset magnification according to its distance from the center, and the closer to the center, the higher the preset magnification.
In a preferred embodiment, the preset magnification increases as the distance between the pixel's positioning information and the center of the obstacle region decreases.
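For illustration, the concentric-ring lookup described in the variation above can be sketched as follows (the specific radii and factors in the usage example are hypothetical; the patent only requires higher factors closer to the center):

```python
def ring_magnification(distance_m, rings):
    """Look up the preset magnification for a concentric-ring
    compensation region. `rings` is a list of (outer_radius_m, factor)
    pairs sorted from innermost to outermost, so points closer to the
    center fall into earlier rings with higher factors."""
    for outer_radius, factor in rings:
        if distance_m <= outer_radius:
            return factor
    return 1.0  # outside all rings: no compensation applied
```

For example, with hypothetical rings of 30 m, 60 m and 100 m, a pixel 10 m from the center gets the innermost (largest) factor.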
In a preferred embodiment, the step S110 includes the following steps:
S111, capturing color images with a binocular camera device.
S112, calculating a parallax matrix from the left and right images captured by the binocular camera device at the same moment, obtaining a distance value for each pixel, and generating point cloud information and a top view based on the left image and the pixels' positioning information in the electronic map.
S113, inputting the left image into a trained machine vision model to perform image segmentation, obtaining the obstacle category label, obstacle code, label confidence and positioning information corresponding to each segmented image region in the left image, and thereby obtaining the composite image information of the left image.
S114, marking the positions, distances and obstacle category labels of all obstacles in the top view.
S115, obtaining the positioning information of the obstacle in the electronic map based on the real-time positioning information of the vehicle and the positional relation between the obstacle and the vehicle in the top view.
In a preferred embodiment, in step S113, the composite image information of the left image at least includes RGB values of each pixel, an obstacle class label D, an obstacle class-based code H, a label confidence T, a distance value P, and positioning information W.
In the step S114, ground information is fitted from the parallax matrix and the angle between the binocular camera device and the ground is obtained; a virtual camera is set according to that angle, and each point of the three-dimensional point cloud is projected into a top view, in which each point carries the obstacle category label of its three-dimensional point.
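One way to realize this virtual-camera projection is to rotate the point cloud by the fitted camera-to-ground angle and drop the height axis. A minimal sketch, assuming a camera frame with x lateral, y pointing down and z forward (the patent does not fix these conventions):

```python
import numpy as np

def project_top_view(points, ground_angle_rad):
    """Rotate an (N, 3) camera-frame point cloud about the x axis by
    the fitted camera-to-ground angle, levelling the virtual camera,
    then keep only lateral x and forward z to form top-view coordinates."""
    c, s = np.cos(ground_angle_rad), np.sin(ground_angle_rad)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0,   c,  -s],
                    [0.0,   s,   c]])
    levelled = np.asarray(points, dtype=np.float64) @ rot.T
    return levelled[:, [0, 2]]  # (x, z) top-view positions
```

Each projected point keeps the obstacle category label of its source 3D point, so the top view can be annotated directly.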
In a preferred embodiment, the step S140 includes the following steps:
S141, capturing color images with a binocular camera device.
S142, calculating a parallax matrix from the left and right images captured by the binocular camera device at the same moment, obtaining a distance value for each pixel, and generating point cloud information and a top view based on the left image and the pixels' positioning information in the electronic map.
S143, inputting the left image into the trained machine vision model to perform image segmentation.
S144, when the point cloud corresponding to a pixel in a segmented image lies within the confidence compensation region, increasing the confidence of the obstacle category label, sorting the obstacle category labels, and outputting the pixel's obstacle category label.
S145, after clustering and sorting the obstacle category labels of all pixels in a segmented image, taking the most frequent obstacle category label as the obstacle category label of that segmented image.
In a preferred embodiment, the method further comprises:
S146, obtaining the obstacle category label, obstacle code, label confidence and positioning information corresponding to the pixels of each segmented image region in the left image, and thereby obtaining the composite image information of the left image.
S147, marking the positions, distances and obstacle category labels of all obstacles in the top view.
S148, planning a path at least according to the top view.
In a preferred embodiment, in the step S145, the composite image information of the left image at least includes RGB values of each pixel, an obstacle class label D, an obstacle class-based code H, a label confidence T, a distance value P, and positioning information W.
Fig. 2 to 6 are schematic diagrams illustrating an implementation process of the obstacle detection method based on positioning compensation according to the present invention. As shown in fig. 2 to 6, the implementation of the present invention is as follows:
referring to fig. 2, first, environment data (including visual data, radar point cloud data, and the like) collected by the unmanned vehicles 11, 12, 13 traveling on the dock is identified to obtain each obstacle category tag and location information of the above obstacle category tags. In this embodiment, a large gantry crane 2 is taken as an example. The method comprises the steps that image data collected through a binocular camera device of a vehicle and/or point cloud data collected through a laser radar are obtained, and all obstacles are based on the distance of the vehicle and the obstacle category labels. The positioning information of the gantry crane 2 is obtained by real-time positioning information and attitude detection based on the vehicle. The process at this stage includes: the unmanned vehicles 11, 12, 13 each capture a color image using a binocular imaging device. And calculating a parallax matrix according to the binocular camera device based on the left image and the right image obtained at the same moment, obtaining a distance value of each pixel point, and generating point cloud information and a top view based on the left image and positioning information of the pixel points in the electronic map. Inputting a trained machine vision model based on a left image to perform image segmentation based on the left image, and obtaining an obstacle class label, an obstacle code, a label confidence coefficient and positioning information corresponding to each segmented image area in the left image to obtain composite image information of the left image. The composite image information of the left image at least comprises an RGB value of each pixel, an obstacle class label D, an obstacle class-based code H, a label confidence T, a distance value P and positioning information W. Each unmanned vehicle indicates the position, distance, and obstacle category label (gantry) of all obstacles in the above-described plan view. 
And fitting ground information according to the parallax matrix and obtaining an included angle between the binocular camera device and the ground, setting a virtual camera according to the included angle and projecting each point of the three-dimensional point cloud into a top view, wherein each point in the top view has an obstacle category label based on the three-dimensional point cloud. And obtaining the positioning information of the gantry crane 2 in the electronic map based on the real-time positioning information of the vehicle and the position relation between the obstacle and the vehicle in the top view.
Referring to fig. 3, then, the obstacle category labels and the positioning information are clustered based on the electronic map, and obstacle regions of the obstacles in the electronic map corresponding to the obstacle category labels are obtained. The more information obtained by the vehicles is aggregated, the more accuracy and real-time performance can be improved. And establishing a corresponding confidence compensation area of the obstacle type label for the obstacle area with the radius exceeding the preset threshold value. And clustering and de-weighting the obstacle category labels acquired by the multiple unmanned vehicles 11, 12 and 13 based on the positioning information to obtain the obstacle area of the gantry crane 2 in the electronic map. The method comprises the steps of establishing a center for an obstacle area in an electronic map, establishing at least one circular confidence compensation area 21 based on a preset radius range by taking the center as a circle center, wherein the value range of the preset radius is less than 100 meters. The preset magnification increases as the distance between the positioning information corresponding to the pixel and the center of the obstacle area decreases.
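The patent does not pin the cross-vehicle clustering and deduplication to a specific algorithm; a simple greedy merge by label and distance illustrates the idea (`merge_dist_m` is a hypothetical parameter):

```python
import math

def cluster_detections(detections, merge_dist_m=15.0):
    """Greedily cluster (label, x, y) detections reported by several
    vehicles: detections with the same label within merge_dist_m of a
    cluster's running mean are merged into one obstacle area whose
    center is the mean position of its members."""
    clusters = []  # each entry: [label, sum_x, sum_y, count]
    for label, x, y in detections:
        for cl in clusters:
            cx, cy = cl[1] / cl[3], cl[2] / cl[3]
            if cl[0] == label and math.hypot(x - cx, y - cy) <= merge_dist_m:
                cl[1] += x
                cl[2] += y
                cl[3] += 1
                break
        else:
            clusters.append([label, x, y, 1])
    return [(label, sx / n, sy / n) for label, sx, sy, n in clusters]
```

Duplicate reports of the same gantry crane from different vehicles thus collapse into one obstacle region, while a distant truck stays separate.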
Referring to fig. 4, when the unmanned vehicle 14 travels toward the gantry crane 2, the distance is initially large, so the unmanned vehicle 14 can capture or scan a large portion of the image or outline of the gantry crane 2 and can identify it relatively easily.
Referring to fig. 5, however, as the unmanned vehicle 14 approaches the gantry crane, only a small portion of the image or contour of the gantry crane 2 can be captured or scanned. Approaching a large object in this way reduces recognition accuracy in the prior art: for example, if for a part of the pixels 22 the confidence of "wall" is 0.8 and the confidence of "gantry crane" is 0.6, those pixels would be classified as a wall, which is obviously an error.
Referring to fig. 6, the detection method of the present invention at this point is: when the unmanned vehicle 14 performs neural network recognition based on at least the acquired image, if the positioning information corresponding to a pixel is located in the confidence compensation area, the confidence of the corresponding obstacle category label is increased; the obstacle category labels are then sorted, and the obstacle category label of the pixel is output. Clustering is performed according to the obstacle category labels of the pixels in the image, the image is divided into a plurality of areas according to the clustering result, and the obstacle category labels of the image areas are obtained. When the positioning information corresponding to a pixel is located in the confidence compensation area, the confidence of the obstacle category label corresponding to that confidence compensation area is multiplied by a preset magnification; the obstacle category labels are then sorted, and the highest-ranked obstacle category label is output as the obstacle category label of the pixel. In this embodiment, the preset magnification is 1.5. The process at this stage includes: the unmanned vehicle 14 captures a color image using a binocular camera device. A parallax matrix is calculated from the left image and the right image obtained by the binocular camera device at the same moment, the distance value of each pixel point is obtained, and point cloud information and a top view are generated based on the left image and the positioning information of the pixel points in the electronic map. The left image is input into a trained machine vision model for image segmentation.
When the point cloud corresponding to a part of the pixels 22 in the segmented image (the pixels whose gantry crane label confidence is to be adjusted) falls within the confidence compensation area 21, the confidence of the gantry crane category label for those pixels 22 is multiplied by the preset magnification (for example, 0.6 × 1.5 = 0.9); the obstacle category labels are then sorted (the confidence of the wall is now 0.8 and that of the gantry crane 0.9), and the obstacle category label of the pixel is output as "gantry crane". After clustering and sorting the obstacle category labels of all pixels in the segmented image, the obstacle category label with the highest occurrence frequency (gantry crane) is taken as the obstacle category label of the segmented image. The obstacle category label, obstacle code, label confidence and positioning information corresponding to the pixels in each segmented image area of the left image are then obtained, yielding the composite image information of the left image. The composite image information of the left image at least comprises the RGB value of each pixel, an obstacle category label D, an obstacle-class-based code H, a label confidence T, a distance value P, and positioning information W. The positions, distances, and obstacle category labels of all obstacles are marked in the above-described top view. Finally, a path can be planned at least according to this top view.
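The per-pixel compensation step can be sketched as follows, reproducing the embodiment's numbers (wall 0.8, gantry crane 0.6, magnification 1.5); capping the boosted confidence at 1.0 is an added assumption, not stated in the patent:

```python
import math

def compensated_label(label_conf, pixel_pos, compensation_areas, boost=1.5):
    """Boost the confidence of an area's label when the pixel's positioning
    information falls inside that confidence compensation area, then
    output the highest-ranked obstacle category label."""
    conf = dict(label_conf)
    for area_label, (cx, cy), radius in compensation_areas:
        inside = math.hypot(pixel_pos[0] - cx, pixel_pos[1] - cy) <= radius
        if inside and area_label in conf:
            conf[area_label] = min(conf[area_label] * boost, 1.0)  # assumed cap
    return max(conf, key=conf.get)

# The embodiment's example: inside the area, 0.6 * 1.5 = 0.9 > 0.8,
# so the pixel is labeled "gantry crane" instead of "wall".
label = compensated_label({"wall": 0.8, "gantry crane": 0.6},
                          (5.0, 5.0),
                          [("gantry crane", (0.0, 0.0), 50.0)])
```

Outside the compensation area the same pixel would still be ranked as "wall", which is exactly the prior-art behavior the compensation corrects.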
It should be emphasized that the present invention differs clearly from schemes that enhance identification accuracy by fixing the positioning information of large obstacles: devices such as a dock gantry crane can move to a certain degree, their positioning information is not fixed, and a fixed-positioning method therefore provides no effective help. In the present invention, the confidence compensation area of a large obstacle is accurately established from the environmental information acquired by multiple vehicles, and the label confidence of the large obstacle is compensated (increased) for a vehicle approaching the confidence compensation area during recognition, so that the large obstacle can be accurately identified. Even if the large obstacle is moved, it will be captured by other unmanned vehicles traveling on the dock, and the corresponding confidence compensation area on the electronic map is adjusted accordingly, greatly improving the accuracy of detecting (movable) large obstacles.
The detection method effectively avoids the technical difficulty of reduced visual accuracy caused by the narrowing viewing angle as the distance decreases, effectively ensures the identification accuracy of large obstacles through confidence compensation based on positioning information, and is particularly suitable for docks and similar road conditions.
Fig. 7 is a schematic structural diagram of the obstacle detection system based on positioning compensation of the present invention. As shown in fig. 7, an embodiment of the present invention further provides a positioning compensation based obstacle detection system 5, which is configured to implement the above positioning compensation based obstacle detection method, and includes:
the acquisition module 51 acquires each obstacle category label obtained through identification by at least one vehicle and the positioning information of the obstacle category labels.
The clustering module 52 clusters the obstacle category labels and the positioning information based on the electronic map to obtain the obstacle areas, in the electronic map, of the obstacles corresponding to the obstacle category labels.
The compensation module 53 establishes a confidence compensation area of the corresponding obstacle category label for each obstacle area whose radius exceeds the preset threshold. And
the identification module 54 is configured to, when the vehicle performs neural network recognition based on at least the acquired image, increase the confidence of the obstacle category label if the positioning information corresponding to the pixel is located in the confidence compensation area, sort the obstacle category labels, and output the obstacle category label of the pixel.
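The four modules above can be wired into one pipeline sketch. The method names, the (label, x, y) detection format, and the label-keyed grouping used as a stand-in for the map-based spatial clustering are all illustrative assumptions:

```python
import math

class ObstacleDetectionSystem:
    """Skeleton of modules 51-54 of Fig. 7 (a sketch, not the patented code)."""

    def __init__(self, threshold_m=3.0, boost=1.5):
        self.threshold_m = threshold_m
        self.boost = boost
        self.areas = []  # [(label, (cx, cy), radius)]

    def acquire(self, vehicle_reports):          # acquisition module 51
        # Merge (label, x, y) detections reported by all vehicles.
        return [det for report in vehicle_reports for det in report]

    def cluster(self, detections):               # clustering module 52
        grouped = {}
        for label, x, y in detections:
            grouped.setdefault(label, []).append((x, y))
        return grouped

    def compensate(self, grouped):               # compensation module 53
        self.areas = []
        for label, pts in grouped.items():
            cx = sum(x for x, _ in pts) / len(pts)
            cy = sum(y for _, y in pts) / len(pts)
            r = max(math.hypot(x - cx, y - cy) for x, y in pts)
            if r > self.threshold_m:             # only large obstacles
                self.areas.append((label, (cx, cy), r))

    def recognize(self, pixel_pos, label_conf):  # recognition module 54
        conf = dict(label_conf)
        for label, (cx, cy), r in self.areas:
            if math.hypot(pixel_pos[0] - cx, pixel_pos[1] - cy) <= r \
                    and label in conf:
                conf[label] *= self.boost
        return max(conf, key=conf.get)

system = ObstacleDetectionSystem()
detections = system.acquire(
    [[("gantry crane", 0.0, 0.0), ("gantry crane", 8.0, 0.0)],
     [("gantry crane", 8.0, 6.0), ("gantry crane", 0.0, 6.0)]])
system.compensate(system.cluster(detections))
label = system.recognize((4.0, 3.0), {"wall": 0.8, "gantry crane": 0.6})
```

The design point is that modules 51-53 run over fleet-wide data to maintain the compensation areas, while module 54 runs per vehicle, per frame.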
The obstacle detection system based on positioning compensation can compensate the confidence of an obstacle based on positioning, and in particular improves the accuracy of detecting large obstacles in real time.
The embodiment of the invention also provides an obstacle detection device based on positioning compensation, comprising a processor and a memory in which executable instructions of the processor are stored, wherein the processor is configured to perform the steps of the positioning-compensation-based obstacle detection method via execution of the executable instructions.
As described above, the obstacle detection device based on positioning compensation of the present invention can perform obstacle confidence compensation based on positioning, and in particular improves the accuracy of detecting large obstacles in real time.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "platform."
Fig. 8 is a schematic structural diagram of the obstacle detection device based on positioning compensation of the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 600 shown in fig. 8 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
Wherein the storage unit stores program code executable by the processing unit 610, so that the processing unit 610 performs the steps according to the various exemplary embodiments of the present invention described in the obstacle detection method section above. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
An embodiment of the present invention further provides a computer-readable storage medium for storing a program, where the program, when executed, implements the steps of the obstacle detection method based on positioning compensation. In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present invention described in the obstacle detection method section above.
As described above, the program of the computer-readable storage medium of this embodiment, when executed, enables the confidence of an obstacle to be compensated based on positioning, and particularly improves the accuracy of detecting a large obstacle in real time.
Fig. 9 is a schematic structural diagram of a computer-readable storage medium of the present invention. Referring to fig. 9, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic or optical signals, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In summary, the obstacle detection method, system, device and storage medium based on positioning compensation of the present invention can compensate the confidence of an obstacle based on positioning, and in particular improve the accuracy of real-time detection of large obstacles.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all of them shall be considered as falling within the protection scope of the invention.

Claims (15)

1. An obstacle detection method based on positioning compensation is characterized by comprising the following steps:
s110, acquiring each obstacle type label acquired by at least one vehicle through identification and positioning information of the obstacle type labels;
s120, clustering the obstacle category labels and the positioning information based on the electronic map to obtain obstacle areas of the obstacles corresponding to the obstacle category labels in the electronic map;
s130, establishing a confidence compensation area of the corresponding obstacle type label for the obstacle area with the radius exceeding a preset threshold value; and
s140, when the vehicle carries out neural network identification at least based on the collected image, if the positioning information corresponding to the pixel is located in the confidence coefficient compensation area, increasing the confidence coefficient of the obstacle type label, and outputting the obstacle type label of the pixel after sorting the obstacle type labels.
2. The method of claim 1, further comprising the steps of:
s150, clustering is carried out according to the obstacle category labels of the pixels in the image, the image is divided into a plurality of areas according to a clustering result, and the obstacle category labels of the image corresponding to the areas are obtained.
3. The obstacle detection method based on positioning compensation according to claim 1, wherein in step S110, the distance of each obstacle from the vehicle and the obstacle category label are obtained through image data collected by a binocular camera device of the vehicle and/or point cloud data collected by a laser radar; and the positioning information of the obstacle is obtained through real-time positioning information and attitude detection of the vehicle.
4. The method for detecting obstacles based on positioning compensation according to claim 1, wherein in step S120, the obstacle category labels collected by a plurality of vehicles are clustered and deduplicated based on the positioning information to obtain the obstacle area of the obstacle in the electronic map.
5. The method for detecting obstacles based on positioning compensation according to claim 1, wherein in step S130, a center is established for each obstacle area in the electronic map, and at least one circular confidence compensation area is established with the center as the circle center based on a preset radius range, wherein the preset threshold is greater than 3 meters, and the preset radius is less than 100 meters.
6. The method for detecting obstacles based on positioning compensation according to claim 5, wherein in step S140, when the positioning information corresponding to the pixel is located in the confidence compensation area, the confidence, for that pixel, of the obstacle category label corresponding to the confidence compensation area is multiplied by a preset magnification; the obstacle category labels are then sorted, and the highest-ranked obstacle category label is output as the obstacle category label of the pixel, the preset magnification ranging from 1.5 to 5.
7. The method of claim 6, wherein the preset magnification is increased as a distance between the positioning information corresponding to the pixel and the center of the obstacle region is decreased.
8. The method for detecting obstacles based on positioning compensation according to claim 1, wherein the step S110 comprises the following steps:
s111, shooting a color image by using a binocular camera;
s112, calculating a parallax matrix according to the left image and the right image which are obtained by the binocular camera device at the same moment, obtaining a distance value of each pixel point, and generating point cloud information and a top view based on the left image and positioning information of the pixel points in an electronic map;
s113, inputting a trained machine vision model based on a left image to perform image segmentation based on the left image, and obtaining an obstacle category label, an obstacle code, a label confidence coefficient and positioning information corresponding to each segmented image area in the left image to obtain composite image information of the left image;
s114, marking the positions, distances and obstacle category labels of all obstacles in the top view; and
s115, obtaining the positioning information of the obstacle in the electronic map based on the real-time positioning information of the vehicle and the position relation between the obstacle and the vehicle in the top view.
9. The method for detecting obstacles based on positioning compensation according to claim 8, wherein in step S113, the composite image information of the left image at least comprises the RGB value of each pixel, an obstacle category label D, an obstacle-class-based code H, a label confidence T, a distance value P, and positioning information W;
in the step S114, ground information is fitted according to the parallax matrix to obtain an included angle between the binocular camera device and the ground, a virtual camera is set according to the included angle, and each point of the three-dimensional point cloud is projected into the top view, each point in the top view having an obstacle category label based on the three-dimensional point cloud.
10. The method for detecting obstacles based on positioning compensation according to claim 1, wherein the step S140 comprises the following steps:
s141, shooting a color image by using a binocular imaging device;
s142, calculating a parallax matrix according to the left image and the right image which are obtained by the binocular camera device at the same moment, obtaining a distance value of each pixel point, and generating point cloud information and a top view based on the left image and positioning information of the pixel points in an electronic map;
s143, inputting a trained machine vision model based on a left image to perform image segmentation based on the left image;
s144, when point clouds corresponding to pixels in the segmented image are in the confidence coefficient compensation area, increasing the confidence coefficient of the obstacle category labels, sorting the obstacle category labels, and outputting the obstacle category labels of the pixels;
s145, after clustering and sequencing the obstacle category labels of all pixels in the segmented image, taking the obstacle category label with the highest occurrence frequency as the obstacle category label of the segmented image.
11. The method for detecting obstacles based on positioning compensation according to claim 10, further comprising:
s146, obtaining obstacle category labels, obstacle codes, label confidence degrees and positioning information corresponding to the pixels in each segmented image area in the left image, and obtaining composite image information of the left image;
s147, marking the positions, the distances and the obstacle category labels of all obstacles in the top view;
and S148, forming a path according to the plan of the top view at least.
12. The method for detecting obstacles based on positioning compensation according to claim 11, wherein in step S146, the composite image information of the left image at least comprises the RGB value of each pixel, an obstacle category label D, an obstacle-class-based code H, a label confidence T, a distance value P, and positioning information W.
13. A positioning compensation based obstacle detection system for implementing the positioning compensation based obstacle detection method according to claim 1, comprising:
the acquisition module, configured to acquire each obstacle category label obtained through identification by at least one vehicle and the positioning information of the obstacle category labels;
the clustering module, configured to cluster the obstacle category labels and the positioning information based on the electronic map to obtain the obstacle areas, in the electronic map, of the obstacles corresponding to the obstacle category labels;
the compensation module, configured to establish a confidence compensation area of the corresponding obstacle category label for the obstacle area with a radius exceeding a preset threshold; and
the recognition module, configured to, when the vehicle performs neural network recognition based on at least the collected image, increase the confidence of the obstacle category label if the positioning information corresponding to the pixel is located in the confidence compensation area, sort the obstacle category labels, and output the obstacle category label of the pixel.
14. An obstacle detection apparatus based on positioning compensation, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the positioning compensation based obstacle detection method according to any one of claims 1 to 12 via execution of executable instructions.
15. A computer-readable storage medium storing a program, wherein the program is executed to implement the steps of the positioning compensation based obstacle detection method according to any one of claims 1 to 12.
CN202110698851.8A 2021-06-23 2021-06-23 Obstacle detection method, system, device and storage medium based on positioning compensation Active CN113298044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110698851.8A CN113298044B (en) 2021-06-23 2021-06-23 Obstacle detection method, system, device and storage medium based on positioning compensation


Publications (2)

Publication Number Publication Date
CN113298044A true CN113298044A (en) 2021-08-24
CN113298044B CN113298044B (en) 2023-04-18

Family

ID=77329443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110698851.8A Active CN113298044B (en) 2021-06-23 2021-06-23 Obstacle detection method, system, device and storage medium based on positioning compensation

Country Status (1)

Country Link
CN (1) CN113298044B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230065284A1 (en) * 2021-09-01 2023-03-02 Baidu Usa Llc Control and planning with localization uncertainty


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108779984A (en) * 2016-03-16 2018-11-09 索尼公司 Signal handling equipment and signal processing method
US20190079193A1 (en) * 2017-09-13 2019-03-14 Velodyne Lidar, Inc. Multiple Resolution, Simultaneous Localization and Mapping Based On 3-D LIDAR Measurements
CN108646764A (en) * 2018-07-25 2018-10-12 吉林大学 Automatic driving vehicle and control method based on fixed course, device and system
CN109143215A (en) * 2018-08-28 2019-01-04 重庆邮电大学 It is a kind of that source of early warning and method are cooperateed with what V2X was communicated based on binocular vision
CN110853399A (en) * 2019-10-12 2020-02-28 惠州市德赛西威智能交通技术研究院有限公司 Parking space identification compensation method based on ultrasonic sensor parking space detection system
WO2021068378A1 (en) * 2019-10-12 2021-04-15 惠州市德赛西威智能交通技术研究院有限公司 Parking space identification and compensation method based on ultrasonic sensor parking space detection system
JP2021082293A (en) * 2019-11-21 2021-05-27 エヌビディア コーポレーション Deep neural network for detecting obstacle instances using RADAR sensors in autonomous machine applications
CN111414848A (en) * 2020-03-19 2020-07-14 深动科技(北京)有限公司 Full-class 3D obstacle detection method, system and medium
CN111583337A (en) * 2020-04-25 2020-08-25 华南理工大学 Omnibearing obstacle detection method based on multi-sensor fusion
CN111679267A (en) * 2020-08-17 2020-09-18 陕西耕辰科技有限公司 Automatic driving system and obstacle detection system thereof
CN112329552A (en) * 2020-10-16 2021-02-05 爱驰汽车(上海)有限公司 Obstacle detection method and device based on automobile
CN112232275A (en) * 2020-11-03 2021-01-15 上海西井信息科技有限公司 Obstacle detection method, system, equipment and storage medium based on binocular recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RAFAEL VIVACQUA et al.: "A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application", Sensors *
SUN Zuolei: "Mobile Robot Localization and Map Building in Large-Scale Irregular Environments", China Doctoral Dissertations Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN113298044B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
US11017244B2 (en) Obstacle type recognizing method and apparatus, device and storage medium
US11783568B2 (en) Object classification using extra-regional context
CA2950791C (en) Binocular visual navigation system and method based on power robot
CN111874006B (en) Route planning processing method and device
US11120280B2 (en) Geometry-aware instance segmentation in stereo image capture processes
CN112967283B (en) Target identification method, system, equipment and storage medium based on binocular camera
CN108734058B (en) Obstacle type identification method, device, equipment and storage medium
CN112740268B (en) Target detection method and device
CN111353522B (en) Method and system for determining road signs in the surroundings of a vehicle
KR20210034097A (en) Camera evaluation technologies for autonomous vehicles
CN112232275B (en) Obstacle detection method, system, equipment and storage medium based on binocular recognition
CN112232139B (en) Obstacle avoidance method based on combination of Yolo v4 and Tof algorithm
CN113469045B (en) Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium
CN115147809A (en) Obstacle detection method, device, equipment and storage medium
Lu et al. Improved situation awareness for autonomous taxiing through self-learning
CN113298044B (en) Obstacle detection method, system, device and storage medium based on positioning compensation
Gökçe et al. Recognition of dynamic objects from UGVs using Interconnected Neuralnetwork-based Computer Vision system
Guo et al. Road environment perception for safe and comfortable driving
CN112639822B (en) Data processing method and device
CN117373285A (en) Risk early warning model training method, risk early warning method and automatic driving vehicle
Liu et al. Research on security of key algorithms in intelligent driving system
CN116189150B (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
Wang et al. Holistic Parking Slot Detection with Polygon-Shaped Representations
Grigioni et al. Safe road-crossing by autonomous wheelchairs: a novel dataset and its experimental evaluation
CN114998861A (en) Method and device for detecting distance between vehicle and obstacle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050

Patentee after: Shanghai Xijing Technology Co.,Ltd.

Address before: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050

Patentee before: SHANGHAI WESTWELL INFORMATION AND TECHNOLOGY Co.,Ltd.