WO2022121766A1 - Method and device for detecting a drivable area - Google Patents
Method and device for detecting a drivable area
- Publication number: WO2022121766A1 (PCT/CN2021/135028)
- Authority: WO — WIPO (PCT)
- Prior art keywords: pixel, target, road image, drivable, points
- Prior art date: 2020-12-07
Classifications
- G06V20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road (under G06V20/56 — context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle)
- G06F18/23 — Pattern recognition; analysing: clustering techniques
- G06F18/24 — Pattern recognition; analysing: classification techniques
Description
- The present invention relates to the technical field of image processing, and more particularly to a method and device for detecting a drivable area.
- Drivable area (FreeSpace) detection is a key technology in assisted driving and automatic driving systems.
- In existing schemes, the drivable area is detected as follows: the image collected by the vehicle camera is divided into different regions according to the objects present, and the drivable area is then identified from the divided regions.
- The present invention discloses a method and device for detecting a drivable area.
- When identifying the drivable area, only the contact points between the target objects and the ground are used as labeling points, which saves a large amount of labeling work. This not only shortens the labeling time but also reduces the subsequent processing workload of the labels to a certain extent, thereby improving the detection efficiency of the drivable area.
- A method for detecting a drivable area, comprising:
- inputting the target road image into a pre-trained drivable area semantic segmentation model, and classifying the pixels in the target road image pixel by pixel to obtain the predicted pixel category information of each pixel;
- determining a drivable area in the current road image.
- The training process of the drivable area semantic segmentation model includes:
- taking a road image containing object category labeling results as the original image input to the model, and generating a ground-truth image of the same size as the original image, wherein each pixel in the ground-truth image records the labeled pixel category information of that pixel;
- taking the original image as a training sample and the ground-truth image as the sample label, and training a deep learning model to obtain the drivable area semantic segmentation model.
- The pixel category information includes: vehicles, pedestrians, curbs, fences, and non-contact points.
- Determining the drivable area in the current road image based on the contact point sets specifically includes:
- sampling a preset number of boundary points closest to the ego vehicle from the closed curve as target boundary points, and outputting the drivable area formed by the target boundary points.
- A detection device for a drivable area, comprising:
- a first labeling unit configured to mark all the contact points between the target objects and the ground in the current road image as labeling points;
- a connecting unit configured to connect all the marked contact points to obtain a non-closed polyline showing the contact contours of all the target objects with the ground;
- a second labeling unit configured to label the object category corresponding to each target object in the non-closed polyline, to obtain a target road image containing the object category labeling results;
- a pixel classification unit configured to input the target road image into the pre-trained drivable area semantic segmentation model and to classify the pixels in the target road image pixel by pixel, obtaining the predicted pixel category information of each pixel;
- a filtering and clustering unit configured to filter and cluster all the marked contact points according to the predicted pixel category information and position information of each pixel in the target road image, to obtain contact point sets of different categories;
- a drivable area determination unit configured to determine the drivable area in the current road image based on the contact point sets.
- The device may further include: a model training unit.
- The model training unit is specifically configured to:
- take a road image containing object category labeling results as the original image input to the model, and generate a ground-truth image of the same size as the original image, wherein each pixel in the ground-truth image records the labeled pixel category information of that pixel;
- take the original image as a training sample and the ground-truth image as the sample label, and train a deep learning model to obtain the drivable area semantic segmentation model.
- The pixel category information includes: vehicles, pedestrians, curbs, fences, and non-contact points.
- The drivable area determination unit specifically includes:
- a smoothing filter subunit configured to perform smoothing filtering on the contact point sets to obtain a sequence of drivable boundary points in the current road image and the pixel coordinates corresponding to each drivable boundary point;
- a coordinate conversion subunit configured to convert the pixel coordinates corresponding to each drivable boundary point into the world coordinate system to obtain the target pixel coordinates of each drivable boundary point;
- a connection subunit configured to connect the sequence of drivable boundary points into a closed curve of the drivable area in the current road image based on the target pixel coordinates;
- a sampling subunit configured to sample, in a lidar radiation manner, a preset number of boundary points closest to the ego vehicle from the closed curve as target boundary points, and to output the drivable area formed by the target boundary points.
- The present invention discloses a method and device for detecting a drivable area.
- The contact points of all target objects with the ground in the acquired current road image are marked as labeling points, and all the marked contact points are connected to obtain a non-closed polyline showing the contact contours of all the target objects with the ground; the object category corresponding to each target object in the non-closed polyline is then labeled to obtain a target road image containing the object category labeling results.
- The target road image is input into the drivable area semantic segmentation model, and the pixels in the target road image are classified pixel by pixel to obtain the predicted pixel category information of each pixel; according to the predicted pixel category information and position information of each pixel in the target road image, all the labeled contact points are filtered and clustered to obtain contact point sets of different categories, and the drivable area in the current road image is determined based on the contact point sets.
- When identifying the drivable area, the present invention uses only the contact points between the target objects and the ground as labeling points. Compared with the traditional scheme, in which both the boundary points of an object and the area inside those boundaries are labeled, the present invention saves a large amount of labeling work, which not only shortens the labeling time but also reduces the subsequent processing workload of the labels to a certain extent, thereby improving the detection efficiency of the drivable area.
- FIG. 1 is a flowchart of a method for detecting a drivable area disclosed in an embodiment of the present invention;
- FIG. 2 is a flowchart of a method for determining a drivable area in a current road image based on contact point sets disclosed in an embodiment of the present invention;
- FIG. 3 is a schematic structural diagram of a detection device for a drivable area disclosed in an embodiment of the present invention;
- FIG. 4 is a schematic structural diagram of a drivable area determination unit disclosed in an embodiment of the present invention.
- As shown in FIG. 1, the method for detecting a drivable area disclosed in an embodiment of the present invention includes:
- Step S101: acquiring a current road image.
- The current road image can be collected by a camera installed on the vehicle.
- Step S102: marking the contact points of all the target objects with the ground in the current road image as labeling points.
- The target objects include: vehicles, pedestrians, curbs, and fences.
- In contrast to schemes that label entire objects, the present invention uses only the contact points between the target objects and the ground as labeling points, thereby saving a large amount of labeling work.
- The reason the present invention marks the contact points between the target objects and the ground, rather than the boundary points of the target objects, is that this facilitates subsequent ranging: if an entire object were labeled, then whenever a boundary point of the labeled object is not on the ground, the ranging process would deviate. In the field of automatic driving, the ground is usually used as a reliable reference, and on this basis the present invention marks the contact points between the target objects and the ground as labeling points.
- Step S103: connecting all the marked contact points to obtain a non-closed polyline showing the contact contours of all the target objects with the ground.
- A non-closed polyline is one in which the head and tail of the line connecting all the labeling points are not joined.
- By contrast, a closed polyline is one in which the head and tail of the connecting line through all the marked points are joined, for example the connecting line through all the points marked around a region in the prior art.
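For illustration only, the following Python sketch shows one way contact-point annotations could be represented and joined into such a non-closed polyline. The `ContactPoint` structure and the left-to-right ordering are assumptions of this sketch, not details specified in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ContactPoint:
    x: int          # pixel column
    y: int          # pixel row
    category: str   # e.g. "vehicle", "pedestrian", "curb", "fence"

def to_open_polyline(points):
    """Order the labeled contact points left to right; consecutive points are
    joined by segments, and the first and last points are NOT joined, which
    is what makes the polyline non-closed."""
    ordered = sorted(points, key=lambda p: p.x)
    return [(p.x, p.y) for p in ordered]

polyline = to_open_polyline([
    ContactPoint(640, 510, "pedestrian"),
    ContactPoint(120, 480, "curb"),
    ContactPoint(300, 500, "vehicle"),
])
# [(120, 480), (300, 500), (640, 510)] -- an open chain whose head and tail differ
```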
- Step S104: labeling the object category corresponding to each target object in the non-closed polyline, to obtain a target road image containing the object category labeling results.
- The labeled object categories include: vehicles, pedestrians, curbs, and fences.
- Step S105: inputting the target road image into the pre-trained drivable area semantic segmentation model, and classifying the pixels in the target road image pixel by pixel to obtain the predicted pixel category information of each pixel.
- The predicted pixel category information includes: vehicles, pedestrians, curbs, fences, and non-contact points. Since only the contact points of vehicles, pedestrians, curbs, and fences with the ground are marked during labeling, all other unlabeled pixels are automatically classified as non-contact points.
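As a hedged illustration of step S105, the sketch below runs a network over the image once and takes a per-pixel argmax over the five categories; the 1x1-convolution "model" and the input size are placeholders standing in for the patent's pre-trained segmentation model.

```python
import torch

CATEGORIES = ["vehicle", "pedestrian", "curb", "fence", "non_contact"]

# Placeholder for the pre-trained drivable area semantic segmentation model:
# any network mapping (N, 3, H, W) images to (N, 5, H, W) logits fits here.
model = torch.nn.Conv2d(3, len(CATEGORIES), kernel_size=1)
model.eval()

with torch.no_grad():
    image = torch.rand(1, 3, 512, 1024)      # stand-in target road image
    logits = model(image)                    # (1, 5, 512, 1024) class scores
    pred = logits.argmax(dim=1).squeeze(0)   # (512, 1024) per-pixel class map

# CATEGORIES[pred[y, x]] is the predicted pixel category of pixel (x, y).
```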
- Step S106: filtering and clustering all the marked contact points according to the predicted pixel category information and position information of each pixel in the target road image, to obtain contact point sets of different categories.
- Once the predicted pixel category information and position information of each pixel in the target road image are known, the predicted category of every marked contact point can be determined; by clustering the contact points that share the same predicted category, contact point sets of different categories are obtained.
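A minimal sketch of one plausible reading of step S106: look up each labeled contact point in the predicted class map, drop points predicted as non-contact (the filtering), and group the rest by predicted category (the clustering). Any additional spatial clustering a real implementation might apply is omitted here.

```python
from collections import defaultdict

CATEGORIES = ["vehicle", "pedestrian", "curb", "fence", "non_contact"]

def filter_and_cluster(contact_points, pred):
    """contact_points: iterable of (x, y) positions of labeled contact points.
    pred: 2-D map of predicted class indices from the segmentation model."""
    clusters = defaultdict(list)
    for x, y in contact_points:
        category = CATEGORIES[int(pred[y][x])]
        if category == "non_contact":        # filter: the model rejects the point
            continue
        clusters[category].append((x, y))    # cluster: group by predicted category
    return dict(clusters)

# Example with a tiny 2x3 prediction map (0 = vehicle, 4 = non_contact):
clusters = filter_and_cluster([(0, 0), (2, 1)], [[0, 4, 4], [4, 4, 4]])
# {'vehicle': [(0, 0)]} -- the point at (2, 1) was filtered out
```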
- Step S107: determining a drivable area in the current road image based on the contact point sets.
- Based on the contact point sets, each region in the current road image can be identified, so that the drivable area can be determined.
- The detection method for a drivable area disclosed in the present invention marks all the contact points between the target objects and the ground in the acquired current road image as labeling points, and connects all the marked contact points to obtain a non-closed polyline showing the contact contours of all the target objects with the ground.
- The object category corresponding to each target object in the non-closed polyline is labeled to obtain a target road image containing the object category labeling results, and the target road image is input into the drivable area semantic segmentation model.
- The model classifies the pixels in the target road image pixel by pixel to obtain the predicted pixel category information of each pixel.
- According to this information and the position of each pixel, all the labeled contact points are filtered and clustered to obtain contact point sets of different categories.
- Based on the contact point sets, the drivable area in the current road image is determined. It can be seen that the present invention uses only the contact points between the target objects and the ground as labeling points when identifying the drivable area. Compared with the traditional scheme, in which both the boundary points of an object and the area inside those boundaries are labeled, the present invention saves a large amount of labeling work, which not only shortens the labeling time but also reduces the subsequent processing workload of the labels to a certain extent, thereby improving the detection efficiency of the drivable area.
- The present invention also provides the training process of the drivable area semantic segmentation model, which is as follows.
- First, road images are labeled with object categories.
- A road image containing the object category labeling results is used as the original image input to the model, and a ground-truth image of the same size as the original image is generated;
- each pixel in the ground-truth image records the labeled pixel category information of that pixel.
- The pixel category information includes: vehicles, pedestrians, curbs, fences, and non-contact points. Since only the contact points of vehicles, pedestrians, curbs, and fences with the ground are marked during labeling, all other unlabeled pixels are automatically classified as non-contact points.
- The drivable area semantic segmentation model is used to classify the original image pixel by pixel to obtain the predicted pixel category information of each pixel.
- The drivable area semantic segmentation model is obtained by performing semantic segmentation training on sample images with a deep learning model, taking the original image as the training sample and the ground-truth image as the sample label.
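Under stated assumptions, the sketch below illustrates the ground-truth image and one training step: unlabeled pixels default to the non-contact class, and a placeholder network is trained with per-pixel cross-entropy. The 1x1-convolution model, image size, and optimizer settings are this sketch's assumptions, not the patent's.

```python
import numpy as np
import torch
import torch.nn as nn

CATEGORIES = ["vehicle", "pedestrian", "curb", "fence", "non_contact"]
NON_CONTACT = CATEGORIES.index("non_contact")

def make_ground_truth(height, width, labeled_points):
    """Ground-truth image of the same size as the original image: each pixel
    records its labeled category index; unlabeled pixels stay non-contact.
    labeled_points: iterable of (x, y, category_index)."""
    gt = np.full((height, width), NON_CONTACT, dtype=np.int64)
    for x, y, cat in labeled_points:
        gt[y, x] = cat
    return gt

# One training step: original image as the sample, ground-truth image as the
# per-pixel label. The 1x1-conv "network" is only a stand-in for the deep
# learning model named in the disclosure.
model = nn.Conv2d(3, len(CATEGORIES), kernel_size=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 64, 128)                     # stand-in original image
gt = make_ground_truth(64, 128, [(10, 40, 0), (90, 50, 2)])
target = torch.from_numpy(gt).unsqueeze(0)            # (N, H, W) long tensor

loss = criterion(model(image), target)                # per-pixel cross-entropy
loss.backward()
optimizer.step()
```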
- In the present invention, road video data with a total duration of 100 hours was collected by vehicle cameras in 4 major cities and multiple scenes; all the road video data was randomly sampled to generate an image pool of 100,000 images, from which 50,000 images were selected as training samples according to business requirements.
- The images in the training samples need to cover as many scenes as possible, such as different city roads, different weather, different time periods, and so on. At the same time, the training samples also need to balance the multiple object categories.
- The present invention performs object edge labeling on each image in the training samples. For example, high road edges, low road edges, pedestrian edges, cyclist edges, vehicle edges, and road barrier edges are labeled respectively.
- The PyTorch platform is used for model training, and multi-machine, multi-GPU training is realized on multiple servers.
- A U-shape segmentation framework is adopted and combined with a pre-designed backbone network to obtain the drivable area detection model.
- The present invention designs a backbone network that combines dilated convolution and separable convolution; the backbone network has the characteristics of a larger receptive field and light weight.
- An asymmetric U-shape encoding-decoding network structure is adopted to improve the ability of the drivable area semantic segmentation model to perceive spatial and semantic information.
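As a hedged sketch of the building blocks named above, the toy network below combines depthwise-separable convolutions (for light weight) with dilation (for a larger receptive field) inside a small U-shaped encoder-decoder. The channel widths, depth, and degree of asymmetry are illustrative assumptions, not the actual backbone.

```python
import torch
import torch.nn as nn

class SepConv(nn.Module):
    """Depthwise-separable 3x3 convolution; the optional dilation enlarges
    the receptive field while the separable form keeps parameters light."""
    def __init__(self, cin, cout, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(cin, cin, 3, padding=dilation,
                                   dilation=dilation, groups=cin)
        self.pointwise = nn.Conv2d(cin, cout, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class TinyUShape(nn.Module):
    """A toy U-shape encoder-decoder: one encoder stage, one dilated
    bottleneck, and one decoder stage with a skip connection."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.enc = SepConv(3, 32)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = SepConv(32, 64, dilation=2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec = SepConv(64 + 32, 32)        # concat of upsampled + skip
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)

logits = TinyUShape()(torch.rand(1, 3, 64, 128))  # -> (1, 5, 64, 128)
```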
- FIG. 2 is a flowchart of a method for determining the drivable area in the current road image based on the contact point sets, disclosed in an embodiment of the present invention, i.e., step S107 in the embodiment shown in FIG. 1.
- The method can include:
- Step S201: performing smoothing filtering on the contact point sets to obtain a sequence of drivable boundary points in the current road image and the pixel coordinates corresponding to each drivable boundary point;
- Step S202: converting the pixel coordinates corresponding to each drivable boundary point into the world coordinate system to obtain the target pixel coordinates of each drivable boundary point;
- Step S203: connecting the sequence of drivable boundary points into a closed curve of the drivable area in the current road image based on the target pixel coordinates;
- Step S204: sampling, in a lidar radiation manner, a preset number of boundary points closest to the ego vehicle from the closed curve as target boundary points, and outputting the drivable area formed by the target boundary points.
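The sketch below gives one consistent reading of steps S201-S204: a moving average stands in for the smoothing filter, a ground-plane homography for the pixel-to-world conversion, and ray casting from the ego vehicle for the "lidar radiation" sampling. The homography `H` would come from camera calibration; all numeric values are placeholders.

```python
import numpy as np

def smooth(points_px, k=5):
    """Moving-average smoothing of the boundary point sequence (S201);
    a stand-in for whatever smoothing filter the implementation uses."""
    kernel = np.ones(k) / k
    return np.stack([np.convolve(points_px[:, i], kernel, mode="same")
                     for i in range(2)], axis=1)

def pixels_to_world(pixels, H):
    """Project (x, y) pixel coordinates to the world frame with a 3x3
    ground-plane homography (S202); calibration is outside this sketch."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    world = (H @ pts.T).T
    return world[:, :2] / world[:, 2:3]

def radiate_sample(boundary_world, num_rays=72):
    """For uniformly spaced ray directions around the ego vehicle at the
    origin (S204), keep the closest boundary point whose bearing falls in
    each ray's angular bin; the kept points form the drivable area."""
    angles = np.arctan2(boundary_world[:, 1], boundary_world[:, 0])
    dists = np.linalg.norm(boundary_world, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * num_rays).astype(int) % num_rays
    nearest = {}
    for b, d, p in zip(bins, dists, boundary_world):
        if b not in nearest or d < nearest[b][0]:
            nearest[b] = (d, p)
    return np.array([nearest[b][1] for b in sorted(nearest)])

H = np.eye(3)  # placeholder calibration homography
boundary_px = smooth(np.array([[100., 400.], [320., 380.], [540., 400.]]))
freespace = radiate_sample(pixels_to_world(boundary_px, H))
```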
- The present invention also discloses a detection device for a drivable area.
- As shown in FIG. 3, the detection device for a drivable area disclosed in an embodiment of the present invention includes:
- an acquisition unit 301, configured to acquire a current road image;
- The current road image can be collected by a camera installed on the vehicle.
- a first labeling unit 302, configured to mark all the contact points between the target objects and the ground in the current road image as labeling points;
- The target objects include: vehicles, pedestrians, curbs, and fences.
- In contrast to schemes that label entire objects, the present invention uses only the contact points between the target objects and the ground as labeling points, thereby saving a large amount of labeling work.
- The reason the present invention marks the contact points between the target objects and the ground, rather than the boundary points of the target objects, is that this facilitates subsequent ranging: if an entire object were labeled, then whenever a boundary point of the labeled object is not on the ground, the ranging process would deviate. In the field of automatic driving, the ground is usually used as a reliable reference, and on this basis the present invention marks the contact points between the target objects and the ground as labeling points.
- a connecting unit 303, configured to connect all the marked contact points to obtain a non-closed polyline showing the contact contours of all the target objects with the ground;
- A non-closed polyline is one in which the head and tail of the line connecting all the labeling points are not joined.
- By contrast, a closed polyline is one in which the head and tail of the connecting line through all the marked points are joined, for example the connecting line through all the points marked around a region in the prior art.
- a second labeling unit 304, configured to label the object category corresponding to each target object in the non-closed polyline, to obtain a target road image containing the object category labeling results;
- The labeled object categories include: vehicles, pedestrians, curbs, and fences.
- a pixel classification unit 305, configured to input the target road image into the pre-trained drivable area semantic segmentation model and to classify the pixels in the target road image pixel by pixel, obtaining the predicted pixel category information of each pixel;
- The predicted pixel category information includes: vehicles, pedestrians, curbs, fences, and non-contact points. Since only the contact points of vehicles, pedestrians, curbs, and fences with the ground are marked during labeling, all other unlabeled pixels are automatically classified as non-contact points.
- a filtering and clustering unit 306, configured to filter and cluster all the marked contact points according to the predicted pixel category information and position information of each pixel in the target road image, to obtain contact point sets of different categories;
- Once the predicted pixel category information and position information of each pixel in the target road image are known, the predicted category of every marked contact point can be determined; by clustering the contact points that share the same predicted category, contact point sets of different categories are obtained.
- a drivable area determination unit 307, configured to determine the drivable area in the current road image based on the contact point sets.
- Based on the contact point sets, each region in the current road image can be identified, so that the drivable area can be determined.
- The detection device for a drivable area disclosed in the present invention marks all the contact points between the target objects and the ground in the acquired current road image as labeling points, and connects all the marked contact points to obtain a non-closed polyline showing the contact contours of all the target objects with the ground.
- The object category corresponding to each target object in the non-closed polyline is labeled to obtain a target road image containing the object category labeling results, and the target road image is input into the drivable area semantic segmentation model.
- The model classifies the pixels in the target road image pixel by pixel to obtain the predicted pixel category information of each pixel.
- According to this information and the position of each pixel, all the labeled contact points are filtered and clustered to obtain contact point sets of different categories.
- Based on the contact point sets, the drivable area in the current road image is determined. It can be seen that the present invention uses only the contact points between the target objects and the ground as labeling points when identifying the drivable area. Compared with the traditional scheme, in which both the boundary points of an object and the area inside those boundaries are labeled, the present invention saves a large amount of labeling work, which not only shortens the labeling time but also reduces the subsequent processing workload of the labels to a certain extent, thereby improving the detection efficiency of the drivable area.
- The present invention also provides a training process for the drivable area semantic segmentation model, and the detection device may further include: a model training unit.
- The model training unit is specifically configured to:
- take a road image containing object category labeling results as the original image input to the model, and generate a ground-truth image of the same size as the original image, wherein each pixel in the ground-truth image records the labeled pixel category information of that pixel;
- take the original image as a training sample and the ground-truth image as the sample label, and train a deep learning model to obtain the drivable area semantic segmentation model.
- The drivable area semantic segmentation model is used to classify the original image pixel by pixel to obtain the predicted pixel category information of each pixel.
- The drivable area semantic segmentation model is obtained by performing semantic segmentation training on sample images with a deep learning model.
- As shown in FIG. 4, the drivable area determination unit includes:
- a smoothing filter subunit 401, configured to perform smoothing filtering on the contact point sets to obtain a sequence of drivable boundary points in the current road image and the pixel coordinates corresponding to each drivable boundary point;
- a coordinate conversion subunit 402, configured to convert the pixel coordinates corresponding to each drivable boundary point into the world coordinate system to obtain the target pixel coordinates of each drivable boundary point;
- a connection subunit 403, configured to connect the sequence of drivable boundary points into a closed curve of the drivable area in the current road image based on the target pixel coordinates;
- a sampling subunit 404, configured to sample, in a lidar radiation manner, a preset number of boundary points closest to the ego vehicle from the closed curve as target boundary points, and to output the drivable area formed by the target boundary points.
Claims (8)
- A method for detecting a drivable area, characterized by comprising: acquiring a current road image; marking the contact points of all target objects with the ground in the current road image as labeling points; connecting all the marked contact points to obtain a non-closed polyline showing the contact contours of all the target objects with the ground; labeling the object category corresponding to each target object in the non-closed polyline to obtain a target road image containing the object category labeling results; inputting the target road image into a pre-trained drivable area semantic segmentation model and classifying the pixels in the target road image pixel by pixel to obtain the predicted pixel category information of each pixel; filtering and clustering all the marked contact points according to the predicted pixel category information and position information of each pixel in the target road image to obtain contact point sets of different categories; and determining the drivable area in the current road image based on the contact point sets.
- The detection method according to claim 1, characterized in that the training process of the drivable area semantic segmentation model comprises: taking a road image containing object category labeling results as the original image input to the model, and generating a ground-truth image of the same size as the original image, wherein each pixel in the ground-truth image records the labeled pixel category information of that pixel; and taking the original image as a training sample and the ground-truth image as the sample label, and training a deep learning model to obtain the drivable area semantic segmentation model.
- The detection method according to claim 2, characterized in that the pixel category information includes: vehicles, pedestrians, curbs, fences, and non-contact points.
- The detection method according to claim 1, characterized in that determining the drivable area in the current road image based on the contact point sets specifically comprises: performing smoothing filtering on the contact point sets to obtain a sequence of drivable boundary points in the current road image and the pixel coordinates corresponding to each drivable boundary point; converting the pixel coordinates corresponding to each drivable boundary point into the world coordinate system to obtain the target pixel coordinates of each drivable boundary point; connecting the sequence of drivable boundary points into a closed curve of the drivable area in the current road image based on the target pixel coordinates; and sampling, in a lidar radiation manner, a preset number of boundary points closest to the ego vehicle from the closed curve as target boundary points, and outputting the drivable area formed by the target boundary points.
- A detection device for a drivable area, characterized by comprising: an acquisition unit configured to acquire a current road image; a first labeling unit configured to mark the contact points of all target objects with the ground in the current road image as labeling points; a connecting unit configured to connect all the marked contact points to obtain a non-closed polyline showing the contact contours of all the target objects with the ground; a second labeling unit configured to label the object category corresponding to each target object in the non-closed polyline to obtain a target road image containing the object category labeling results; a pixel classification unit configured to input the target road image into a pre-trained drivable area semantic segmentation model and classify the pixels in the target road image pixel by pixel to obtain the predicted pixel category information of each pixel; a filtering and clustering unit configured to filter and cluster all the marked contact points according to the predicted pixel category information and position information of each pixel in the target road image to obtain contact point sets of different categories; and a drivable area determination unit configured to determine the drivable area in the current road image based on the contact point sets.
- The detection device according to claim 5, characterized by further comprising a model training unit, wherein the model training unit is specifically configured to: take a road image containing object category labeling results as the original image input to the model, and generate a ground-truth image of the same size as the original image, wherein each pixel in the ground-truth image records the labeled pixel category information of that pixel; and take the original image as a training sample and the ground-truth image as the sample label, and train a deep learning model to obtain the drivable area semantic segmentation model.
- The detection device according to claim 6, characterized in that the pixel category information includes: vehicles, pedestrians, curbs, fences, and non-contact points.
- The detection device according to claim 5, characterized in that the drivable area determination unit specifically comprises: a smoothing filter subunit configured to perform smoothing filtering on the contact point sets to obtain a sequence of drivable boundary points in the current road image and the pixel coordinates corresponding to each drivable boundary point; a coordinate conversion subunit configured to convert the pixel coordinates corresponding to each drivable boundary point into the world coordinate system to obtain the target pixel coordinates of each drivable boundary point; a connection subunit configured to connect the sequence of drivable boundary points into a closed curve of the drivable area in the current road image based on the target pixel coordinates; and a sampling subunit configured to sample, in a lidar radiation manner, a preset number of boundary points closest to the ego vehicle from the closed curve as target boundary points, and to output the drivable area formed by the target boundary points.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011416890.6 | 2020-12-07 | | |
CN202011416890.6A CN112200172B (zh) | 2020-12-07 | 2020-12-07 | Method and device for detecting a drivable area |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022121766A1 (zh) | 2022-06-16 |
Family
ID=74034402
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/135028 WO2022121766A1 (zh) | 2021-12-02 | Method and device for detecting a drivable area |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112200172B (zh) |
WO (1) | WO2022121766A1 (zh) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN112200172B (zh) | 2020-12-07 | 2021-02-19 | 天津天瞳威势电子科技有限公司 | Method and device for detecting a drivable area |
- CN113191256B (zh) * | 2021-04-28 | 2024-06-11 | 北京百度网讯科技有限公司 | Training method and apparatus for a lane line detection model, electronic device, and storage medium |
- CN113963061B (zh) * | 2021-10-29 | 2024-07-12 | 广州文远知行科技有限公司 | Method and apparatus for acquiring curb distribution information, electronic device, and storage medium |
- CN114626468B (zh) * | 2022-03-17 | 2024-02-09 | 小米汽车科技有限公司 | Method and apparatus for generating shadows in an image, electronic device, and storage medium |
- CN116052122B (zh) * | 2023-01-28 | 2023-06-27 | 广汽埃安新能源汽车股份有限公司 | Drivable space detection method and apparatus, electronic device, and storage medium |
- CN115877405A (zh) * | 2023-01-31 | 2023-03-31 | 小米汽车科技有限公司 | Drivable area detection method and apparatus, and vehicle |
- CN118314334B (zh) * | 2024-06-07 | 2024-10-11 | 比亚迪股份有限公司 | Method for determining an obstacle grounding frame, controller, vehicle, storage medium, and program |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN106228134A (zh) * | 2016-07-21 | 2016-12-14 | 北京奇虎科技有限公司 | Drivable area detection method, apparatus, and system based on road surface images |
- US10769793B2 (en) * | 2018-04-17 | 2020-09-08 | Baidu Usa Llc | Method for pitch angle calibration based on 2D bounding box and its 3D distance for autonomous driving vehicles (ADVs) |
- CN110599497A (zh) * | 2019-07-31 | 2019-12-20 | 中国地质大学(武汉) | Drivable area segmentation method based on a deep neural network |
- CN110907949A (zh) * | 2019-10-28 | 2020-03-24 | 福瑞泰克智能系统有限公司 | Drivable area detection method and system for automatic driving, and vehicle |
- CN110809254A (zh) * | 2019-10-29 | 2020-02-18 | 天津大学 | Spider-web routing protocol based on parking areas in urban VANETs |
- CN111104893B (zh) * | 2019-12-17 | 2022-09-20 | 苏州智加科技有限公司 | Target detection method and apparatus, computer device, and storage medium |
- 2020-12-07: CN application CN202011416890.6A filed; granted as CN112200172B (status: active)
- 2021-12-02: PCT application PCT/CN2021/135028 (WO2022121766A1) filed (status: active, application filing)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2008051612A (ja) * | 2006-08-24 | 2008-03-06 | Hitachi Ltd | Landmark recognition system |
- JP2013015341A (ja) * | 2011-06-30 | 2013-01-24 | Aisin Aw Co Ltd | Reference data acquisition device, reference data acquisition system, reference data acquisition method, and reference data acquisition program |
- US20150142248A1 (en) * | 2013-11-20 | 2015-05-21 | Electronics And Telecommunications Research Institute | Apparatus and method for providing location and heading information of autonomous driving vehicle on road within housing complex |
- CN105793669A (zh) * | 2013-12-06 | 2016-07-20 | 日立汽车系统株式会社 | Vehicle position estimation system, device, method, and camera device |
- CN106485233A (zh) * | 2016-10-21 | 2017-03-08 | 深圳地平线机器人科技有限公司 | Drivable area detection method, apparatus, and electronic device |
- CN109117690A (zh) * | 2017-06-23 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | Drivable area detection method, apparatus, device, and storage medium |
- CN107481284A (zh) * | 2017-08-25 | 2017-12-15 | 京东方科技集团股份有限公司 | Method, apparatus, terminal, and system for measuring the accuracy of a target tracking trajectory |
- CN109313710A (zh) * | 2018-02-02 | 2019-02-05 | 深圳蓝胖子机器人有限公司 | Target recognition model training method, target recognition method, device, and robot |
- CN110210363A (zh) * | 2019-05-27 | 2019-09-06 | 中国科学技术大学 | Method for detecting lane-line crossing by a target vehicle based on vehicle-mounted images |
- CN110490238A (zh) * | 2019-08-06 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Image processing method, apparatus, and storage medium |
- CN112200172A (zh) * | 2020-12-07 | 2021-01-08 | 天津天瞳威势电子科技有限公司 | Method and device for detecting a drivable area |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN116580286A (zh) * | 2023-07-12 | 2023-08-11 | 宁德时代新能源科技股份有限公司 | Image labeling method, apparatus, device, and storage medium |
- CN116580286B (zh) * | 2023-07-12 | 2023-11-03 | 宁德时代新能源科技股份有限公司 | Image labeling method, apparatus, device, and storage medium |
- CN116884003A (zh) * | 2023-07-18 | 2023-10-13 | 南京领行科技股份有限公司 | Automatic picture labeling method and apparatus, electronic device, and storage medium |
- CN116884003B (zh) * | 2023-07-18 | 2024-03-22 | 南京领行科技股份有限公司 | Automatic picture labeling method and apparatus, electronic device, and storage medium |
- CN118050300A (zh) * | 2024-04-16 | 2024-05-17 | 河北天辰仪器设备有限公司 | Intelligent method and device for measuring the vertical permeability coefficient of geotextiles |
Also Published As
Publication number | Publication date |
---|---|
CN112200172B (zh) | 2021-02-19 |
CN112200172A (zh) | 2021-01-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21902473; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21902473; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.12.2023) |