WO2021013227A1 - Image processing method and device for target detection - Google Patents

Image processing method and device for target detection

Info

Publication number
WO2021013227A1
Authority
WO
WIPO (PCT)
Prior art keywords
edge
target
category
edge point
group
Prior art date
Application number
PCT/CN2020/103849
Other languages
English (en)
French (fr)
Inventor
宫原俊二
Original Assignee
长城汽车股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长城汽车股份有限公司 filed Critical 长城汽车股份有限公司
Priority to EP20844715.1A priority Critical patent/EP3979196A4/en
Publication of WO2021013227A1 publication Critical patent/WO2021013227A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/168 Segmentation; Edge detection involving transform domain methods
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • G06T2207/30261 Obstacle

Definitions

  • the invention relates to the technical field of intelligent transportation and image processing, in particular to an image processing method and device for target detection.
  • AD: Autonomous Driving
  • ADAS: Advanced Driver Assistance System
  • sensors supporting AD/ADAS mainly include radar, vision camera systems (hereinafter also referred to as cameras), lidar, ultrasonic sensors, etc. Among them, the vision camera system is the most widely used because it can obtain the same two-dimensional image information as human vision; its typical applications include detection of designated targets such as lane detection, object detection, vehicle detection, pedestrian detection, and cyclist detection.
  • after the camera captures an image, edge extraction and other processing are performed on it, so that the objects and environmental information in the captured image can be extracted.
  • in Figure 1, target A is a small target that needs to be detected (such as a traffic cone), and object B is a background object that does not need to be detected, but the edges of target A and object B are connected or overlapped (for example, in the part circled by the dotted line), so it is difficult to distinguish target A from object B in the image captured by the camera, and recognition of target A is inaccurate.
  • moreover, target A or object B may well be a self-shadowing object (for example, in sunlight), and the edge of the corresponding object B or target A may disappear in the shadow, further aggravating the difficulty of separating the edges of target A and object B.
  • the present invention aims to provide an image processing method for target detection to at least partially solve the above technical problems.
  • An image processing method for target detection including:
  • in the same edge point group, if edge points of the peripheral edge line of the target and of an object outside the target are connected or overlapped, the following edge point separation processing is performed: edge points are selected from the edge point group to form a first category and a second category respectively, where the first category includes edge points formed into a main group and the second category includes the other edge points in the edge point group besides the main group, the height of the edge points of the main group in the image being less than the height of the target; linear regression processing is performed on the edge points in the main group to obtain a corresponding linear regression line; edge points are selected one by one from the second category, and the deviation of each selected edge point relative to the linear regression line is calculated; if the deviation is less than a preset standard deviation, the selected edge point is added to the main group; and the above processing is repeated until the deviation is greater than or equal to the preset standard deviation, forming the final main group; and
  • the target is created based on the edge points in the final main group for target detection.
  • selecting edge points from the edge point group to form the first category includes: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to a set height position to form the first category, where the selected edge points need to meet the condition that the initial height is less than the height of the target.
  • the initial height of the edge points in the first category in the image is two-thirds of the height of the target.
  • selecting the edge points from the second category one by one includes: starting from the edge points in the second category at the lowest height position, selecting the edge points one by one for deviation calculation.
  • the image processing method further includes: forming the unselected edge points in the second category and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discarding the Secondary group.
  • the image processing method for target detection of the present invention has the following advantages: the method of the embodiment of the present invention successfully separates the connected or overlapping edge groups in the image taken by the vehicle camera, and realizes the More accurate detection of targets.
  • Another object of the present invention is to provide an image processing device for target detection, so as to at least partially solve the above technical problems.
  • An image processing device for target detection including:
  • An image preprocessing module for determining an analysis area that can cover the target in an image captured by a camera on a vehicle, and preprocessing the image in the analysis area to obtain an edge point group for the target;
  • the edge point separation module is used to perform the following edge point separation processing if the peripheral edge line of the target and the peripheral edge line of an object outside the target are connected or overlapped in the same edge point group:
  • edge points are selected from the edge point group to form a first category and a second category respectively, where the first category includes edge points formed into a main group and the second category includes the other edge points in the edge point group besides the main group, the height of the edge points of the main group in the image being less than the height of the target; linear regression processing is performed on the edge points in the main group to obtain a corresponding linear regression line; edge points are selected from the second category one by one, and the deviation of each selected edge point relative to the linear regression line is calculated; if the deviation is less than a preset standard deviation, the selected edge point is added to the main group; and the above processing is repeated until the deviation is greater than or equal to the preset standard deviation, to form the final main group; and
  • the target creation module is configured to create the target based on the edge points in the final main group for target detection.
  • the edge point separation module is configured to select edge points from the edge point group to form the first category by: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to a set height position to form the first category, where the selected edge points need to satisfy the condition that the initial height is less than the height of the target.
  • the initial height of the edge points in the first category in the image is two-thirds of the height of the target.
  • the edge point separation module is configured to select edge points from the second category one by one by: starting from the edge point at the lowest height position in the second category, selecting edge points one by one for deviation calculation.
  • the edge point separation module is further configured to: form edge points that are not selected in the second category and edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discard The secondary group.
  • Another objective of the present invention is to provide a machine-readable storage medium and a processor to at least partially solve the above technical problems.
  • a machine-readable storage medium has instructions stored on the machine-readable storage medium for causing a machine to execute the above-mentioned image processing method for target detection.
  • a processor for executing instructions stored in the machine-readable storage medium.
  • the machine-readable storage medium and the processor have the same advantages as the foregoing image processing method over the prior art, and will not be repeated here.
  • Figure 1 is a schematic diagram of the edge connection or overlap between the object and the target, which causes a target detection error;
  • FIG. 2 is a schematic diagram of the flow from image collection to ternary image in an embodiment of the present invention
  • Fig. 3 is a schematic diagram of a flow from a ternary image to a line segment in an embodiment of the present invention
  • Figures 4(a) and 4(b) are exemplary diagrams of the pairing process in the embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of an image processing method for target detection according to an embodiment of the present invention.
  • Fig. 6(a) is a schematic diagram of the positive edge group of the target obtained by image processing of the traffic cone in the example of the embodiment of the present invention;
  • Fig. 6(b) is a schematic diagram of the negative edge group of the target obtained by the same image processing; and Fig. 6(c) is a schematic diagram of the target created after pairing in this example;
  • FIG. 7 is a schematic flowchart of edge point separation processing in an embodiment of the present invention.
  • Fig. 8(a) is a schematic diagram of classifying positive edge groups in an example of an embodiment of the present invention
  • Fig. 8(b) is a schematic diagram of dividing a primary group and a secondary group in an example of an embodiment of the present invention
  • Fig. 8(c) Is a schematic diagram of the target created after pairing with the main group in this example;
  • Fig. 9 is a schematic structural diagram of an image processing device for target detection according to an embodiment of the present invention.
  • 910, image preprocessing module; 920, edge point separation module; 930, target creation module.
  • the targets mentioned in the embodiments of the present invention include lane lines and pedestrians, vehicles, road signs, and roadblock equipment (such as traffic cones) located within the camera range of the front vehicle camera.
  • "edge point group", "edge line", and "edge" can be understood interchangeably in the corresponding scenarios.
  • the target detection process in the embodiment of the present invention mainly includes three parts, namely: acquiring a ternary image, acquiring an edge line from the ternary image, and performing a pairing process based on the edge line. These three parts are described in detail below.
  • Part 1 From image acquisition to ternary image
  • Fig. 2 is a schematic diagram of the flow from image collection to ternary image in an embodiment of the present invention. As shown in Figure 2, the process can include the following steps:
  • the analysis area is determined, including the sub-sampling process.
  • the size of the analysis area should ensure that the target can be covered. Moreover, sub-sampling is performed because current images have many pixels (more than 1 million), and sub-sampling reduces the number of points to compute. After sub-sampling, the analysis area has approximately 100 × 250 pixels.
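As a concrete illustration, the analysis-area cropping and sub-sampling can be sketched as follows. The crop coordinates and the sub-sampling step are invented for illustration; the patent only fixes the rough output size of about 100 × 250 pixels.

```python
import numpy as np

def make_analysis_area(image, top, left, height, width, step):
    """Crop an analysis area that covers the target, then sub-sample it
    by keeping every `step`-th pixel in both directions."""
    area = image[top:top + height, left:left + width]
    return area[::step, ::step]

# A synthetic single-channel frame with more than 1 million pixels.
frame = np.zeros((1000, 2000), dtype=np.uint8)

# Hypothetical crop and step chosen so the result is about 100 x 250 pixels.
sub = make_analysis_area(frame, top=300, left=500, height=400, width=1000, step=4)
```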
  • smoothing is a preprocessing step for the differential processing
  • Sobel is used as the differential filter
  • peripheral processing is necessary for the peripheral pixels, because the filter matrix extends beyond the image at the border for these pixels
  • peripheral processing excludes or specially treats the peripheral pixels for the subsequent calculations.
  • the following smoothing filter is applied to the image in the analysis area:
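The patent reproduces the smoothing matrix (and the subsequent Sobel matrix) only as figures, so they are not shown in this extraction. A minimal sketch of step S250, assuming a standard 3 × 3 averaging kernel for the smoothing and the standard vertical-edge Sobel kernel for the differential filter, with peripheral pixels excluded (left at zero) as the peripheral processing, might look like:

```python
import numpy as np

# Assumed kernels: the patent's actual matrices appear only as figure images.
SMOOTH = np.full((3, 3), 1.0 / 9.0)            # 3x3 averaging filter
SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])         # vertical-edge Sobel filter

def filt3(img, kernel):
    """Apply a 3x3 filter by correlation; peripheral pixels are excluded
    from the computation (left as zero), as in the peripheral processing."""
    h, w = img.shape
    out = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.sum(img[r - 1:r + 2, c - 1:c + 2] * kernel)
    return out

# A step edge: dark left half, bright right half.
img = np.zeros((5, 8))
img[:, 4:] = 100.0
diff = filt3(filt3(img, SMOOTH), SOBEL_X)   # smooth first, then differentiate
```

The difference image `diff` is positive across the rising edge and zero in flat regions and at the excluded border.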
  • S260 Set a threshold to obtain a ternary image, wherein the threshold is set for the positive/negative difference image respectively.
  • the image processed in step S250 is truncated by the threshold to form a ternary image:
  • aaa(m, n) and ttt(m, n) are the difference image and the ternary image, respectively.
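A sketch of the thresholding step, with aaa(m, n) the difference image and ttt(m, n) the resulting ternary image. The exact comparison rule and the symmetric threshold values are assumptions; the patent only states that separate thresholds are set for the positive and negative difference images.

```python
import numpy as np

def ternarize(aaa, thr_pos, thr_neg):
    """Truncate the difference image aaa(m, n) into the ternary image
    ttt(m, n) with values +1 (positive edge), 0, and -1 (negative edge),
    using separate thresholds for the positive/negative difference image."""
    ttt = np.zeros(aaa.shape, dtype=int)
    ttt[aaa >= thr_pos] = 1
    ttt[aaa <= -thr_neg] = -1
    return ttt

aaa = np.array([[120.0, 10.0, -5.0, -90.0]])
ttt = ternarize(aaa, thr_pos=50.0, thr_neg=50.0)
```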
  • Fig. 3 is a schematic diagram of a flow from a ternary image to an edge line in an embodiment of the present invention. As shown in Figure 3, the process can include the following steps:
  • step S310 a positive edge or a negative edge is selected in the ternary image.
  • whether a positive edge or a negative edge is selected is determined according to the connection or overlap between the target edge and the object edge: the edge that is connected with or overlaps the object edge is selected.
  • Step S320 edge grouping.
  • Step S330 Narrowing, which is a conventional technique that helps to reduce the number of edge points.
  • the edges of the target should be narrow: wide edges are unnecessary and increase computation, so the edges need to be narrowed.
  • the narrowing of the edge can be performed in the horizontal direction.
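The patent describes narrowing only as "a conventional technique" performed in the horizontal direction. One common convention, keeping only the middle pixel of each horizontal run of edge pixels, can be sketched as follows (the choice of the middle pixel is an assumption):

```python
import numpy as np

def narrow_horizontally(edge_mask):
    """For each image row, keep only the centre pixel of every horizontal
    run of edge pixels, so wide edges collapse to one pixel per row."""
    out = np.zeros_like(edge_mask)
    for r in range(edge_mask.shape[0]):
        cols = np.flatnonzero(edge_mask[r])
        if cols.size == 0:
            continue
        # split this row's edge columns into consecutive runs
        runs = np.split(cols, np.where(np.diff(cols) > 1)[0] + 1)
        for run in runs:
            out[r, run[len(run) // 2]] = 1
    return out

mask = np.array([[0, 1, 1, 1, 0, 0, 1, 1]])
narrowed = narrow_horizontally(mask)
```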
  • Step S340 the groups are sorted.
  • sorting renumbers the groups so that the number of edge points per group decreases; this also helps to reduce edge points, because groups with few points can then be deleted.
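A minimal sketch of the sorting step; the cut-off below which a group is deleted is not specified in the patent and is invented here.

```python
def sort_groups(groups, min_points=5):
    """Renumber the groups so that the edge-point count decreases, and
    delete groups with few points (fewer than `min_points`)."""
    kept = [g for g in groups if len(g) >= min_points]
    return sorted(kept, key=len, reverse=True)

# Three hypothetical groups of edge points, given as (n, m) pairs.
groups = [[(1, 1)] * 3, [(2, 2)] * 30, [(3, 3)] * 12]
sorted_groups = sort_groups(groups)
```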
  • Step S350 Hough transform.
  • in step S360, lines are estimated, the edge points corresponding to each line are obtained, and inappropriate edge points are deleted to obtain the final edge lines.
  • FIGS. 4(a) and 4(b) are exemplary diagrams of the pairing process in the embodiment of the present invention.
  • the line estimation in the second part finally creates the positive and negative edge lines shown in Figure 4(a), where P represents a positive edge line (posi) and N represents a negative edge line (nega).
  • the pairing between positive and negative edge lines is as follows:
  • target 1: P3, N1
  • target 2: P4, N2
  • Each group initially grouped has 30 edge points.
  • target 1 has 24 edge points
  • target 2 has 27 edge points.
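The pairing rule itself is not spelled out in this excerpt. A plausible sketch pairs each positive edge line with the nearest negative edge line to its right within a width limit; the line positions, the width limit, and the per-line point counts below are invented so that the resulting pairs and totals match the P3/N1 and P4/N2 example.

```python
def pair_edges(pos_lines, neg_lines, max_width=30):
    """Pair each positive edge line with the nearest unused negative edge
    line to its right, within `max_width` pixels (an assumed rule).
    Each line is (name, mean horizontal position, edge-point count)."""
    targets = []
    used = set()
    for p_name, p_x, p_pts in pos_lines:
        best = None
        for n_name, n_x, n_pts in neg_lines:
            if n_name in used or not (0 < n_x - p_x <= max_width):
                continue
            if best is None or n_x < best[1]:
                best = (n_name, n_x, n_pts)
        if best:
            used.add(best[0])
            targets.append((p_name, best[0], p_pts + best[2]))
    return targets

# Hypothetical positions/counts mirroring the Fig. 4 example.
pos = [("P3", 100, 12), ("P4", 150, 14)]
neg = [("N1", 110, 12), ("N2", 160, 13)]
targets = pair_edges(pos, neg)
```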
  • FIG. 5 is a schematic flowchart of an image processing method for target detection according to an embodiment of the present invention. As shown in FIG. 5, the image processing method for target detection in the embodiment of the present invention may include the following steps:
  • Step S510 Determine an analysis area that can cover the target in the image captured by the camera on the vehicle, and preprocess the image in the analysis area to obtain an edge point group for the target.
  • the pre-processing process can refer to the above description about Figure 2, Figure 3 and Figure 4(a) and Figure 4(b).
  • Step S520 In the same edge point group, if edge points of the peripheral edge line of the target and the peripheral edge line of an object outside the target ("object outside the target" is hereinafter referred to simply as an object) are connected or overlapped, edge point separation processing is performed.
  • if the peripheral edge line of the object is longer than the peripheral edge line of the target, it can be considered that there is an edge point connection between the target and the object.
  • Figure 6(a) is the positive edge point group of the target obtained by image processing of the traffic cone (circled in the figure), and Figure 6(b) is the negative edge point group of the target obtained by the same processing (circled in the figure).
  • as can be seen, the positive edge point group also includes edge points of the object.
  • Figure 6(c) is the target created after pairing. It can be seen that the positive edge of the target is connected with or overlaps the edge of the background object, so the created target is not ideal.
  • FIG. 7 is a schematic flowchart of edge point separation processing in an embodiment of the present invention. As shown in Figure 7, it may include the following steps:
  • Step S521 selecting edge points from the edge point group to form the first type and the second type respectively.
  • the first category includes edge points formed as a main group
  • the second category includes the other edge points in the edge point group besides the main group, where the height of the edge points of the main group in the image is less than the height of the target. It should be noted that, since the edge point group is obtained for the target and the edge points in the initial first category all come from the edge point group, it can be determined that the initial first category is a subset of the edge point group, or in other words a subset of the target's edge.
  • selecting edge points from the edge point group to form the first category includes: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to the set height position to form the first category, where the selected edge points need to meet the condition that the initial height is less than the height of the target.
  • the initial height of the first category in the image is a preset ratio of the height of the target; the preset ratio is preferably two-thirds, to ensure that sufficient target edge points are selected into the first category.
  • Step S522 Perform linear regression processing on the edge points in the main group to obtain a corresponding linear regression line.
  • Step S523 selecting edge points from the second category one by one, and calculating the deviation of the selected edge points with respect to the linear regression line.
  • n(i) and m(i) are the horizontal and vertical positions of the edge points in the second category, respectively, and i represents the number of the edge points in the second category.
  • selecting the edge points from the second category one by one includes: starting from the edge points at the lowest height position in the second category, selecting the edge points one by one for deviation calculation.
  • the first category and the second category can also be referred to as the lower category and the upper category (or the low category and the high category); for example, i is an edge point at a middle height of the upper category
  • and i+1, i+2, ... are the edge points selected at successively higher positions.
  • step S524 if the deviation is less than the preset standard deviation, the corresponding edge points in the second category are added to the main group.
  • this step S524 may further include: forming the unselected edge points in the second category and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discarding the secondary group.
  • Figure 8(b) is a schematic diagram of dividing the primary group and the secondary group.
  • the secondary group generally belongs to background objects and has nothing to do with the required lane information, so in some cases the secondary group can be discarded.
  • Step S525 Repeat the above processing (steps S521 to S524) until the deviation is greater than or equal to the preset standard deviation, to form the final main group.
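Steps S521 to S525 can be sketched end to end as follows. The two-thirds seeding ratio comes from the text; the numeric value of the preset standard deviation, the regression orientation (horizontal position regressed on height), and the test data are assumptions.

```python
import numpy as np

def separate_edge_points(points, target_height, ratio=2.0 / 3.0, preset_std=2.0):
    """Grow the main group from the low edge points of one edge point group.

    `points` are (n, m) pairs: n is the horizontal position and m the height
    in the image. Points below `ratio * target_height` seed the main group
    (first category); the remaining points form the second category and are
    examined lowest-first. Each candidate joins the main group while its
    deviation from the current linear regression line stays below the
    preset standard deviation; the first larger deviation stops the growth.
    """
    pts = sorted(points, key=lambda p: p[1])
    main = [p for p in pts if p[1] < ratio * target_height]
    second = [p for p in pts if p[1] >= ratio * target_height]
    for n_i, m_i in second:
        m = np.array([p[1] for p in main], dtype=float)
        n = np.array([p[0] for p in main], dtype=float)
        a, b = np.polyfit(m, n, 1)            # regression line n = a*m + b
        deviation = abs(n_i - (a * m_i + b))
        if deviation < preset_std:
            main.append((n_i, m_i))           # S524: add to the main group
        else:
            break                             # S525: final main group formed
    return main

# Target edge along n = 0.5*m; a background object's edge veers off above m = 70.
target_edge = [(0.5 * m, m) for m in range(0, 80, 5)]
object_edge = [(0.5 * m + (m - 70) * 3.0, m) for m in range(75, 120, 5)]
main = separate_edge_points(target_edge + object_edge, target_height=90)
```

The points left in the second category when the loop stops correspond to the secondary group, which generally belongs to the background object and can be discarded.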
  • Step S530 Create the target based on the edge points in the final main group for target detection.
  • the roles of the first category and the second category in the embodiment of the present invention can be exchanged.
  • the machine-readable storage medium includes, but is not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, and other media that can store program code.
  • the instructions in the machine-readable storage medium can be executed by the processor on the vehicle, and the processor can use the vehicle CAN bus to obtain the required vehicle information, lane line information, environmental information, etc. from the environment perception part of the ADS to determine whether the vehicle is in a lane keeping state, a normal lane changing state, or an abnormal lane changing state, etc., and correspondingly execute the instructions stored in the machine-readable storage medium.
  • the processor can be the ECU (Electronic Control Unit) of the vehicle, or it can be a conventional controller configured independently in the camera, such as a CPU, a single-chip microcomputer, a DSP (Digital Signal Processor), an SoC (System on a Chip), etc.; it is understandable that these independent controllers can also be integrated into the ECU.
  • the processor is preferably a controller with a high calculation speed and rich I/O ports, and requires input/output ports that can communicate with the vehicle-wide CAN bus, input/output ports for switching signals, and a network cable interface.
  • the above-mentioned image processing method for target detection can be integrated into a vehicle processor (such as a camera processor) in the form of code.
  • in this way, the connected or overlapping edge groups in the image taken by the camera can be separated well, and the finally detected target position and structure are more accurate.
  • an embodiment of the present invention also provides an image processing device for target detection.
  • Fig. 9 is a schematic structural diagram of an image processing device for target detection according to an embodiment of the present invention. As shown in FIG. 9, the image processing apparatus may include:
  • the image preprocessing module 910 is configured to determine an analysis area that can cover the target in an image captured by a camera on a vehicle, and preprocess the image in the analysis area to obtain an edge point group for the target.
  • the edge point separation module 920 is used to perform the following edge point separation processing if edge points of the peripheral edge line of the target and the peripheral edge line of an object outside the target are connected or overlapped in the same edge point group: selecting edge points from the edge point group to form a first category and a second category respectively, where the first category includes edge points formed into a main group and the second category includes the other edge points in the edge point group besides the main group, the height of the edge points of the main group in the image being less than the height of the target; performing linear regression processing on the edge points in the main group to obtain a corresponding linear regression line; selecting edge points from the second category one by one, and calculating the deviation of each selected edge point relative to the linear regression line; if the deviation is less than a preset standard deviation, adding the selected edge point to the main group; and repeating the above processing until the deviation is greater than or equal to the preset standard deviation, to form the final main group.
  • the target creation module 930 is configured to create the target based on the edge points in the final main group for target detection.
  • the edge point separation module being configured to select edge points from the edge point group to form the first category includes: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to a set height position to form the first category, where the selected edge points need to meet the condition that the initial height is less than the height of the target.
  • the edge point separation module being configured to select edge points from the second category one by one includes: starting from the edge point in the second category at the lowest height position, selecting edge points one by one for deviation calculation.
  • the edge point separation module is further configured to: form the edge points that are not selected in the second category and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discard the secondary group.
  • An embodiment of the present invention also provides a device, which includes the above-mentioned processor, a memory, and a program stored in the memory and capable of running on the processor, and the processor implements the above-mentioned image processing method when the program is executed.
  • the present application also provides a computer program product which, when executed on a vehicle, is adapted to execute a program initialized with the steps of the image processing method described above.
  • the embodiments of the present application can be provided as methods, systems, or computer program products. Therefore, the present application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the technical fields of intelligent transportation and image processing, and provides an image processing method and device for target detection. The method includes: performing image preprocessing to obtain an edge point group for a target; in the same edge point group, if edge points of the peripheral edge lines of the target and of an object are connected or overlapped, selecting edge points from the edge point group to form a first category and a second category, where the first category includes edge points, formed into a main group, whose height is less than the height of the target; performing linear regression processing on the edge points in the main group to obtain a linear regression line; selecting edge points from the second category one by one and calculating their deviation relative to the linear regression line; if the deviation is less than a preset standard deviation, adding the selected edge point to the main group to form the final main group; and creating the target based on the edge points in the final main group. The present invention successfully separates connected or overlapping edge groups in an image.

Description

Image processing method and device for target detection
Cross-reference to related application
This application claims the benefit of Chinese patent application 201910677235.7, filed on July 25, 2019, the content of which is incorporated herein by reference.
Technical field
The present invention relates to the technical fields of intelligent transportation and image processing, and in particular to an image processing method and device for target detection.
Background
At present, vehicles with AD (Autonomous Driving) or ADAS (Advanced Driver Assistance System) functions have begun to reach the market, greatly promoting the development of intelligent transportation.
In the prior art, the sensors supporting AD/ADAS mainly include radar, vision camera systems (hereinafter also referred to as cameras), lidar, ultrasonic sensors, etc. Among them, the vision camera system is the most widely used because it can obtain the same two-dimensional image information as human vision; its typical applications include detection of designated targets such as lane detection, object detection, vehicle detection, pedestrian detection, and cyclist detection. After the camera captures an image, edge extraction and other processing are performed on the image to extract the objects and environmental information in the captured image.
However, when using a camera for target detection, limited by the performance of current cameras, detection is very sensitive to edge connections between the target and unwanted objects in the background, and edge connection or overlap between an object and the target often causes detection errors. As shown in Figure 1, target A is a small target that needs to be detected (such as a traffic cone), and object B is a background object that does not need to be detected, but the edges of target A and object B are connected or overlapped (for example, in the part circled by the dotted line), and the two may be close in chroma, so it is difficult to distinguish target A from object B in the image captured by the camera, making recognition of target A inaccurate. In addition, in target detection, target A or object B may well be a self-shadowing object (for example, in sunlight), and the edge of the corresponding object B or target A may disappear in the shadow, further aggravating the difficulty of separating the edges of target A and object B.
The prior art includes schemes that distinguish objects from targets using object content and external texture, color, etc., but such schemes are very complex and are clearly unsuitable for objects and targets with similar texture, color, and other features. Therefore, there is currently no effective method for separating the edges between targets and unwanted objects in an image.
Summary of the invention
In view of this, the present invention aims to provide an image processing method for target detection, so as to at least partially solve the above technical problems.
To achieve the above object, the technical solution of the present invention is realized as follows:
An image processing method for target detection, including:
determining an analysis area that can cover the target in an image captured by a camera on a vehicle, and preprocessing the image in the analysis area to obtain an edge point group for the target;
in the same edge point group, if edge points of the peripheral edge line of the target and the peripheral edge line of an object other than the target are connected or overlapped, performing the following edge point separation processing: selecting edge points from the edge point group to form a first category and a second category respectively, where the first category includes edge points formed into a main group and the second category includes the other edge points in the edge point group besides the main group, and where the height of the edge points of the main group in the image is less than the height of the target; performing linear regression processing on the edge points in the main group to obtain a corresponding linear regression line; selecting edge points from the second category one by one, and calculating the deviation of each selected edge point relative to the linear regression line; if the deviation is less than a preset standard deviation, adding the selected edge point to the main group; and repeating the above processing until the deviation is greater than or equal to the preset standard deviation, to form the final main group; and
creating the target based on the edge points in the final main group for target detection.
Further, selecting edge points from the edge point group to form the first category includes: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to a set height position to form the first category, where the selected edge points need to satisfy the condition that the initial height is less than the height of the target.
Further, the initial height of the edge points of the first category in the image is two-thirds of the height of the target.
Further, selecting edge points from the second category one by one includes: starting from the edge point at the lowest height position in the second category, selecting edge points one by one for deviation calculation.
Further, the image processing method also includes: forming the unselected edge points in the second category and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discarding the secondary group.
Compared with the prior art, the image processing method for target detection of the present invention has the following advantage: the method of the embodiments of the present invention successfully separates connected or overlapping edge groups in images captured by a vehicle camera, achieving more accurate detection of targets.
Another object of the present invention is to provide an image processing device for target detection, so as to at least partially solve the above technical problems.
To achieve the above object, the technical solution of the present invention is realized as follows:
An image processing device for target detection, including:
an image preprocessing module for determining an analysis area that can cover the target in an image captured by a camera on a vehicle, and preprocessing the image in the analysis area to obtain an edge point group for the target;
an edge point separation module for performing, in the same edge point group, the following edge point separation processing if edge points of the peripheral edge line of the target and the peripheral edge line of an object other than the target are connected or overlapped: selecting edge points from the edge point group to form a first category and a second category respectively, where the first category includes edge points formed into a main group and the second category includes the other edge points in the edge point group besides the main group, and where the height of the edge points of the main group in the image is less than the height of the target; performing linear regression processing on the edge points in the main group to obtain a corresponding linear regression line; selecting edge points from the second category one by one, and calculating the deviation of each selected edge point relative to the linear regression line; if the deviation is less than a preset standard deviation, adding the selected edge point to the main group; and repeating the above processing until the deviation is greater than or equal to the preset standard deviation, to form the final main group; and
a target creation module for creating the target based on the edge points in the final main group for target detection.
Further, the edge point separation module being used to select edge points from the edge point group to form the first category includes: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to a set height position to form the first category, where the selected edge points need to satisfy the condition that the initial height is less than the height of the target.
Further, the initial height of the edge points of the first category in the image is two-thirds of the height of the target.
Further, the edge point separation module being used to select edge points from the second category one by one includes: starting from the edge point at the lowest height position in the second category, selecting edge points one by one for deviation calculation.
Further, the edge point separation module is also used to: form the unselected edge points in the second category and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discard the secondary group.
The image processing device has the same advantages over the prior art as the above image processing method, which will not be repeated here.
Another object of the present invention is to provide a machine-readable storage medium and a processor that at least partially solve the above technical problems.
To this end, the technical solution of the present invention is realized as follows:
A machine-readable storage medium storing instructions for causing a machine to execute the image processing method for target detection described above.
A processor for executing the instructions stored in the machine-readable storage medium.
The machine-readable storage medium and the processor have the same advantages over the prior art as the image processing method described above, which are not repeated here.
Other features and advantages of the present invention are described in detail in the detailed description that follows.
Brief Description of the Drawings
The drawings, which form a part of the present invention, provide a further understanding of the invention; the illustrative embodiments and their description explain the invention and do not unduly limit it. In the drawings:
Fig. 1 illustrates a target detection error caused by connected or overlapping edges between an object and a target;
Fig. 2 is a flow diagram, in an embodiment of the invention, from image capture to ternary image;
Fig. 3 is a flow diagram, in an embodiment of the invention, from ternary image to line segments;
Figs. 4(a) and 4(b) illustrate the pairing process in an embodiment of the invention;
Fig. 5 is a flow diagram of the image processing method for target detection according to an embodiment of the invention;
Fig. 6(a) shows the positive edge group of the target obtained by image processing of a traffic cone in an example of an embodiment of the invention, Fig. 6(b) shows the corresponding negative edge group, and Fig. 6(c) shows the target created after pairing in that example;
Fig. 7 is a flow diagram of the edge point separation processing in an embodiment of the invention;
Fig. 8(a) illustrates classification of the positive edge group in an example of an embodiment of the invention, Fig. 8(b) illustrates division into main and secondary groups, and Fig. 8(c) shows the target created after pairing with the main group in that example; and
Fig. 9 is a structural diagram of the image processing apparatus for target detection according to an embodiment of the invention.
Reference numerals:
910, image pre-processing module; 920, edge point separation module; 930, target creation module.
Detailed Description
It should be noted that, where no conflict arises, the embodiments of the invention and the features therein may be combined with one another.
In addition, the targets mentioned in the embodiments of the invention include lane lines as well as pedestrians, vehicles, road signs and barrier devices (such as traffic cones) within the imaging range of the front camera of the vehicle. Also, in the embodiments, "edge point group", "edge line" and "edge" may be understood interchangeably in the respective context.
The invention is described in detail below with reference to the drawings and in combination with the embodiments.
To present the embodiments more clearly, target detection in autonomous driving is first introduced.
The target detection process of the embodiments comprises three main parts: obtaining a ternary image, obtaining edge lines from the ternary image, and a pairing process based on the edge lines. These three parts are described below.
1) Part one: from image capture to ternary image
Fig. 2 is a flow diagram from image capture to ternary image in an embodiment of the invention. As shown in Fig. 2, the flow may include the following steps:
S210, image acquisition, i.e. capturing an image with the camera.
S220, color selection: one of red, green and blue is selected.
S230, analysis region determination, including a sub-sampling process.
The analysis region must be large enough to cover the target. Sub-sampling is performed because current images have many pixels, more than one million, and sub-sampling reduces the number of points to compute. After sub-sampling, the analysis region has roughly 100 × 250 pixels.
S240, shrunken/sub-sampled image, which is the result of analysis region determination and sub-sampling.
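The analysis-region and sub-sampling steps above can be sketched as follows. The region size, offsets and stride in this Python sketch are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def crop_and_subsample(image, top, left, height, width, stride=2):
    """Crop an analysis region that covers the target, then keep every
    `stride`-th pixel in each direction to cut the number of points."""
    region = image[top:top + height, left:left + width]
    return region[::stride, ::stride]

# A mock 1-megapixel single-channel image.
img = np.zeros((1000, 1000), dtype=np.uint8)
small = crop_and_subsample(img, top=100, left=200, height=500, width=200, stride=2)
print(small.shape)  # (250, 100) -- roughly the ~100 x 250 pixel region mentioned above
```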
S250, smoothing, differential processing and peripheral processing of the image.
Smoothing is a pre-processing step for the differentiation, and a Sobel filter is used as the differential filter. Peripheral processing is necessary for the peripheral pixels, which are special cases in the matrix operations; it excludes/handles the peripheral pixels for the subsequent computation.
For example, before the differential processing, the following smoothing filter is applied to the image in the analysis region:
[smoothing filter matrix, shown only as an image in the original publication]
Then the following Sobel filter is applied:
[Sobel filter matrix, shown only as an image in the original publication]
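The smoothing-then-Sobel step can be illustrated in Python. Because the patent's actual kernels survive only as the image placeholders above, the matrices below are assumptions: a 3×3 mean filter and the standard horizontal Sobel kernel, applied as plain "valid"-mode cross-correlation:

```python
import numpy as np

# Assumed kernels (the patent's own matrices are embedded images):
SMOOTH = np.ones((3, 3)) / 9.0                 # 3x3 mean smoothing filter
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # standard horizontal Sobel kernel

def filter2d(img, k):
    """'Valid'-mode cross-correlation, the way image filters are usually applied."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# A vertical step edge: dark on the left half, bright on the right half.
img = np.zeros((7, 8))
img[:, 4:] = 100.0
diff = filter2d(filter2d(img, SMOOTH), SOBEL_X)  # smooth first, then differentiate
print(diff.max() > 0)  # True: a strong positive response at the step
```

Peripheral pixels are simply dropped here by the "valid" mode, which is one possible reading of the peripheral processing mentioned above.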
S260, thresholding to obtain the ternary image, with thresholds set separately for the positive/negative differential image.
For example, the image processed in step S250 is truncated by a threshold to form the ternary image:
aaa(m,n) < -threshold: ttt(m,n) = -1, a negative edge of the ternary image
aaa(m,n) > +threshold: ttt(m,n) = +1, a positive edge of the ternary image
otherwise, ttt(m,n) = 0
where aaa(m,n) and ttt(m,n) are the differential image and the ternary image respectively.
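The three-way threshold rule of step S260 is easy to state in code. A minimal sketch (the array values are arbitrary examples):

```python
import numpy as np

def ternarize(aaa, threshold):
    """Map a differential image aaa(m, n) to a ternary image ttt(m, n):
    +1 for positive edges, -1 for negative edges, 0 elsewhere."""
    ttt = np.zeros_like(aaa, dtype=int)
    ttt[aaa > threshold] = 1    # positive edges
    ttt[aaa < -threshold] = -1  # negative edges
    return ttt

aaa = np.array([[5.0, -7.0, 1.0],
                [0.0,  9.0, -2.0]])
print(ternarize(aaa, 3.0))
# [[ 1 -1  0]
#  [ 0  1  0]]
```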
2) Part two: from ternary image to edge lines
Fig. 3 is a flow diagram from ternary image to edge lines in an embodiment of the invention. As shown in Fig. 3, the flow may include the following steps:
S310, selecting positive or negative edges in the ternary image.
In the embodiments, whether positive or negative edges are selected depends on how the target's edges connect or overlap with the object's edges: whichever edge has connected or overlapped with the object's edge is the one selected.
S320, edge grouping.
S330, edge narrowing, a conventional technique that helps reduce the number of edge points.
In target detection the target's edges should be narrow, since wide edges are unnecessary and increase computation, so narrowing the edges is necessary. In the embodiments, the narrowing may proceed along the horizontal direction.
S340, group sorting.
Sorting renumbers the groups in order of decreasing number of edge points and also helps reduce edge points; for example, groups with very few points can be deleted.
S350, Hough transform.
S360, line estimation and obtaining the edge points corresponding to each line, deleting unsuitable edge points to obtain the final edge lines.
It will be appreciated that some of these steps, such as the Hough transform, are conventional for both lane detection and target detection and can be understood with reference to the prior art, so their implementation details are not elaborated here.
3) Part three: the pairing process
For example, when the target is a traffic cone, the pairing process estimates the cone from the edge lines. Figs. 4(a) and 4(b) illustrate the pairing process in an embodiment of the invention. The line estimation of part two finally creates the positive and negative edge lines shown in Fig. 4(a), where P denotes a positive edge line (posi) and N a negative edge line (nega). To create targets, the positive and negative edge lines are paired as follows:
(posi, nega) = (P1, N1), (P1, N2),
(P2, N1), (P2, N2),
(P3, N1), (P3, N2),
(P4, N1), (P4, N2)
That is, eight pairs are created, and each pair can be evaluated against some criteria. The embodiments may use the following criteria:
a) average height position: < threshold;
b) vertical overlap of the two segments: a larger overlap scores more points;
c) distance between the edge lines: a distance close to 20 cm scores more points;
d) angles of the edge lines: the closer the sum of the angles of the two edge lines (i.e. the more symmetric the pair), the higher the score.
Thus, through this evaluation, target 1 (P3, N1) and target 2 (P4, N2) are obtained in Fig. 4(b); each initially grouped set had 30 edge points, and after matching target 1 has 24 edge points and target 2 has 27.
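A minimal sketch of the pair evaluation using criteria b), c) and d). The edge-line tuples, the pixel-to-metre scale and the scoring weights are all illustrative assumptions, not values from the patent; with the mock data below, the two best-scoring pairs come out as (P4, N2) and (P3, N1), consistent with the example above:

```python
from itertools import product

# Each edge line: (name, y_top, y_bottom, x_mid_px, angle_deg). Mock data.
posis = [("P3", 10, 40, 100, 8.0), ("P4", 12, 42, 160, 7.0)]
negas = [("N1", 11, 39, 118, -7.5), ("N2", 13, 41, 179, -6.5)]

def vertical_overlap(a, b):
    """Criterion b): length of the vertical interval shared by two segments."""
    return max(0.0, min(a[2], b[2]) - max(a[1], b[1]))

def score(p, n, px_per_m=100.0):
    """Hypothetical additive score; weights are assumptions for illustration."""
    s = vertical_overlap(p, n)                      # b) larger overlap, more points
    dist_m = abs(n[3] - p[3]) / px_per_m
    s += max(0.0, 10.0 - 50.0 * abs(dist_m - 0.2))  # c) distance near 20 cm
    s += max(0.0, 10.0 - abs(p[4] + n[4]))          # d) angle sum near zero (symmetry)
    return s

pairs = sorted(product(posis, negas), key=lambda pn: -score(*pn))
print([(p[0], n[0]) for p, n in pairs[:2]])  # [('P4', 'N2'), ('P3', 'N1')]
```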
Based on the above target detection process, the image processing method for target detection of the embodiments of the invention is now described in detail.
Fig. 5 is a flow diagram of the image processing method for target detection according to an embodiment of the invention. As shown in Fig. 5, the method may include the following steps:
S510, determining, in an image captured by a camera on the vehicle, an analysis region capable of covering the target, and pre-processing the image within the analysis region to obtain an edge point group for the target.
For example, when the target is a traffic cone, the analysis region is assumed to be about 1.6 × 4 m, since the size of a traffic cone is 0.2–0.3 m (diameter) × 0.7 m (height). The pre-processing can be understood with reference to the descriptions of Fig. 2, Fig. 3 and Figs. 4(a)/4(b) above.
S520, within the same edge point group, if edge points of the peripheral edge line of the target and the peripheral edge line of an object other than the target (hereinafter simply "the object") connect or overlap, performing edge point separation processing.
In fact, if the peripheral edge line of the object is longer than that of the target, an edge point connection between target and object can be considered to exist.
Continuing with the traffic cone example, Fig. 6(a) shows the positive edge point group of the target obtained by image processing (circled in the figure), and Fig. 6(b) the corresponding negative edge point group (circled in the figure). Ideally, the peripheral edges of the target and of the object would fall into different edge point groups, but merging into a single group due to connected or overlapping edge lines cannot be avoided; as Fig. 6(a) shows, the positive edge point group includes edge points of the object. Consequently, an "ideal target" with positive and negative edges cannot be created from Figs. 6(a) and 6(b). For example, Fig. 6(c) shows the target created after pairing: the positive edge of the target has connected or overlapped with the edge of the background object, so the created target is not ideal.
In response, Fig. 7 is a flow diagram of the edge point separation processing in an embodiment of the invention. As shown in Fig. 7, it may include the following steps:
S521, selecting edge points from the edge point group to form a first class and a second class respectively.
Here the first class comprises the edge points formed into the main group, and the second class the edge points of the edge point group other than the main group; the height in the image of the edge points in the main group is less than the height of the target. Note that since the edge point group is for the target and the edge points of the initial first class all come from the edge point group, the initial first class is a subset of the edge point group, i.e. a subset of the target's edge.
In a preferred embodiment, selecting edge points from the edge point group to form the first class comprises: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to a set height position to form the first class, wherein the selected edge points satisfy the condition that the initial height is less than the height of the target.
Corresponding to the example of Figs. 6(a)–6(c), the positive edge point group of Fig. 6(a) is divided into the first and second classes as shown in Fig. 8(a), where solid crosses denote edge points of the peripheral edge line of the target (the first class) and hollow crosses denote edge points of the peripheral edge line of the object (the second class).
The initial height of the first class in the image is a preset proportion of the target's height, preferably two thirds, to ensure that enough target edge points are selected into the first class.
S522, performing linear regression processing on the edge points in the main group to obtain a corresponding linear regression line.
Linear regression is a conventional mathematical algorithm; it yields the regression line n = c1*m + c0 and the standard deviation σ, where c1 and c0 are the coefficients of the regression line, n denotes the vertical position of an edge point and m its horizontal position.
S523, selecting edge points one by one from the second class and computing the deviation of each selected edge point from the linear regression line.
Specifically, for the regression line n = c1*m + c0 above, the computation is:
Δn(i) = n(i) - (c1*m(i) + c0)
where m(i) and n(i) are the horizontal and vertical positions of an edge point in the second class, and i is the index of the edge point within the second class. Preferably, selecting edge points one by one from the second class comprises: starting from the edge point at the lowest height position in the second class, selecting edge points one by one for deviation computation.
In the embodiments, when the target and the object are positioned one above the other in the image, the first and second classes may also be called the lower and upper classes (or low and high classes); for example, i is the lowest edge point of the upper class, and i+1, i+2, … are the edge points selected at successively higher positions.
S524, if the deviation is less than the preset standard deviation, adding the corresponding edge point of the second class to the main group.
Preferably, step S524 may further comprise: forming the unselected edge points of the second class and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discarding the secondary group.
For example, assuming that three edge points of the second class satisfy the deviation condition, their average is taken:
ΔN(i) = (Δn(i) + Δn(i+1) + Δn(i+2)) / 3
This continues as long as ΔN(i) < 1.3σ, where the coefficient 1.3 is the set multiple mentioned above and 1.3σ is the preset standard deviation; edge points with ΔN(i) ≥ 1.3σ go into the secondary group.
Accordingly, for the upper/lower-class case, the finally obtained edge point coordinates (m(i), n(i)) are the desired upper limit of the edge: edge points below this limit form the main group (which may also be understood as the target group), and edge points above it form the secondary group. Corresponding to Fig. 8(a), Fig. 8(b) shows the division into main and secondary groups; the secondary group generally belongs to background objects and is irrelevant to the desired lane information and the like, so in some cases the secondary group may be discarded.
S525, repeating the above processing (steps S521–S524) until the deviation is greater than or equal to the preset standard deviation, to form the final main group.
The main group suitable for creating the target is thereby obtained.
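The separation loop of steps S521–S525 can be sketched as follows. This is a simplified illustration, not the definitive implementation: the regression line is fitted once rather than refitted on every repetition, the deviation is taken as an absolute value, and k = 1.3 with a 3-point moving average follows the worked example in the text; all function and variable names are hypothetical:

```python
import numpy as np

def separate_edge_points(main, second, k=1.3, window=3):
    """Grow the main group with second-class points whose smoothed deviation
    from the main group's regression line stays below k * sigma.
    main, second: lists of (m, n) = (horizontal, vertical) edge points;
    second must be ordered from the lowest image position upward."""
    main = list(main)
    m = np.array([p[0] for p in main], dtype=float)
    n = np.array([p[1] for p in main], dtype=float)
    c1, c0 = np.polyfit(m, n, 1)           # regression line n = c1*m + c0
    sigma = np.std(n - (c1 * m + c0))      # scatter of the main group about the line
    deviations = []
    for mi, ni in second:
        deviations.append(abs(ni - (c1 * mi + c0)))     # delta-n(i)
        if np.mean(deviations[-window:]) >= k * sigma:  # delta-N(i) >= 1.3*sigma
            break                # this point and all higher ones -> secondary group
        main.append((mi, ni))
    return main

lower = [(0, 0.0), (1, 2.1), (2, 3.9), (3, 6.0)]     # first class (target edge)
upper = [(4, 8.0), (5, 9.95), (6, 14.0), (7, 16.0)]  # second class, lowest point first
grown = separate_edge_points(lower, upper)
print(len(grown))  # 6: the two near-line points are absorbed, the outliers are not
```

Discarding whatever the loop does not absorb corresponds to forming and dropping the secondary group described above.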
S530, creating the target from the edge points in the final main group for target detection.
Corresponding to Figs. 8(a) and 8(b), the target created with the edge points of the finally obtained main group is shown in Fig. 8(c); compared with Fig. 6(c), the edge points of the interfering object have been separated out, so the finally detected target is more faithful to the actual target.
In addition, as those skilled in the art will appreciate, the first class and the second class in the embodiments are interchangeable.
Another embodiment of the invention further provides a machine-readable storage medium storing instructions for causing a machine to execute the image processing method for target detection described above. The machine-readable storage medium includes, without limitation, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, and any other medium capable of storing program code.
The instructions of the machine-readable storage medium may be executed by a processor on the vehicle, which may obtain the required vehicle information, lane line information, environment information and the like from the environment perception part of the ADS via the vehicle CAN bus and so on, determine whether the vehicle is in a lane-keeping state, a normal lane-change state or an abnormal lane-change state, and execute the instructions stored in the machine-readable storage medium accordingly.
The processor may be the vehicle's ECU (Electronic Control Unit), or a conventional controller configured independently for the camera, such as a CPU, a microcontroller, a DSP (Digital Signal Processor) or a SOC (System on a Chip); it will be understood that such independent controllers may also be integrated into the ECU. The processor is preferably configured as a controller with high computing speed and rich I/O, having input/output ports capable of CAN communication with the whole vehicle, input/output ports for switch signals, a network interface, and the like.
Accordingly, the image processing method for target detection described above can be integrated in code form into a processor of the vehicle (e.g. a camera processor). Experiments show that with the image processing method of the embodiments, connected or overlapping edge groups in images captured by the camera are well separated, and the position, structure and the like of the finally detected target are more accurate.
Based on the same inventive concept as the above image processing method for target detection, an embodiment of the invention further provides an image processing apparatus for target detection. Fig. 9 is a structural diagram of the apparatus. As shown in Fig. 9, the image processing apparatus may include:
an image pre-processing module 910 for determining, in an image captured by a camera on the vehicle, an analysis region capable of covering the target, and pre-processing the image within the analysis region to obtain an edge point group for the target;
an edge point separation module 920 for, within the same edge point group, if edge points of the peripheral edge line of the target and the peripheral edge line of an object other than the target connect or overlap, performing the following edge point separation processing: selecting edge points from the edge point group to form a first class and a second class respectively, wherein the first class comprises edge points formed into a main group, the second class comprises the edge points of the edge point group other than the main group, and the height in the image of the edge points in the main group is less than the height of the target; performing linear regression processing on the edge points in the main group to obtain a corresponding linear regression line; selecting edge points one by one from the second class and computing the deviation of each selected edge point from the linear regression line; if the deviation is less than a preset standard deviation, adding the selected edge point to the main group; and repeating the above processing until the deviation is greater than or equal to the preset standard deviation, to form a final main group; and
a target creation module 930 for creating the target from the edge points in the final main group for target detection.
In a preferred embodiment, the edge point separation module forming the first class from the edge point group comprises: starting from the bottom position of the edge point group, selecting edge points in order of increasing height up to a set height position to form the first class, wherein the selected edge points satisfy the condition that the initial height is less than the height of the target.
In a preferred embodiment, the edge point separation module selecting edge points one by one from the second class comprises: starting from the edge point at the lowest height position in the second class, selecting edge points one by one for deviation computation.
In a preferred embodiment, the edge point separation module is further for: forming the unselected edge points of the second class and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and discarding the secondary group.
An embodiment of the invention further provides a device comprising the above processor, a memory, and a program stored in the memory and executable on the processor, the processor implementing the above image processing method when executing the program.
The present application further provides a computer program product adapted, when executed on a vehicle, to execute a program initialized with the steps of the above image processing method.
Those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system or a computer program product. The present application may therefore take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, without limitation, disk storage, CD-ROM and optical storage) containing computer-usable program code.
It should be noted that the implementation details and effects of the image processing apparatus of the embodiments are the same as or similar to those of the method embodiments above and are not repeated here.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (12)

  1. An image processing method for target detection, characterized in that the image processing method comprises:
    determining, in an image captured by a camera on a vehicle, an analysis region capable of covering the target, and pre-processing the image within the analysis region to obtain an edge point group for the target;
    within the same edge point group, if edge points of the peripheral edge line of the target and the peripheral edge line of an object other than the target connect or overlap, performing the following edge point separation processing:
    selecting edge points from the edge point group to form a first class and a second class respectively, wherein the first class comprises edge points formed into a main group, the second class comprises the edge points of the edge point group other than the main group, and the height in the image of the edge points in the main group is less than the height of the target;
    performing linear regression processing on the edge points in the main group to obtain a corresponding linear regression line;
    selecting edge points one by one from the second class, and computing the deviation of each selected edge point from the linear regression line;
    if the deviation is less than a preset standard deviation, adding the selected edge point to the main group; and
    repeating the above processing until the deviation is greater than or equal to the preset standard deviation, so as to form a final main group; and
    creating the target from the edge points in the final main group for target detection.
  2. The image processing method for target detection according to claim 1, characterized in that selecting edge points from the edge point group to form the first class comprises:
    starting from the bottom position of the edge point group, selecting the edge points in order of increasing height up to a set height position to form the first class, wherein the selected edge points satisfy the condition that the initial height is less than the height of the target.
  3. The image processing method for target detection according to claim 1, characterized in that the initial height in the image of the edge points in the first class is two thirds of the height of the target.
  4. The image processing method for target detection according to claim 1, characterized in that selecting edge points one by one from the second class comprises:
    starting from the edge point at the lowest height position in the second class, selecting edge points one by one for deviation computation.
  5. The image processing method for target detection according to claim 1, characterized in that the image processing method further comprises:
    forming the unselected edge points of the second class and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and
    discarding the secondary group.
  6. An image processing apparatus for target detection, characterized in that the image processing apparatus comprises:
    an image pre-processing module for determining, in an image captured by a camera on a vehicle, an analysis region capable of covering the target, and pre-processing the image within the analysis region to obtain an edge point group for the target;
    an edge point separation module for, within the same edge point group, if edge points of the peripheral edge line of the target and the peripheral edge line of an object other than the target connect or overlap, performing the following edge point separation processing:
    selecting edge points from the edge point group to form a first class and a second class respectively, wherein the first class comprises edge points formed into a main group, the second class comprises the edge points of the edge point group other than the main group, and the height in the image of the edge points in the main group is less than the height of the target;
    performing linear regression processing on the edge points in the main group to obtain a corresponding linear regression line;
    selecting edge points one by one from the second class, and computing the deviation of each selected edge point from the linear regression line;
    if the deviation is less than a preset standard deviation, adding the selected edge point to the main group; and
    repeating the above processing until the deviation is greater than or equal to the preset standard deviation, so as to form a final main group; and
    a target creation module for creating the target from the edge points in the final main group for target detection.
  7. The image processing apparatus for target detection according to claim 6, characterized in that the edge point separation module forming the first class from the edge point group comprises:
    starting from the bottom position of the edge point group, selecting the edge points in order of increasing height up to a set height position to form the first class, wherein the selected edge points satisfy the condition that the initial height is less than the height of the target.
  8. The image processing apparatus for target detection according to claim 6, characterized in that the initial height in the image of the edge points in the first class is two thirds of the height of the target.
  9. The image processing apparatus for target detection according to claim 6, characterized in that the edge point separation module selecting edge points one by one from the second class comprises:
    starting from the edge point at the lowest height position in the second class, selecting edge points one by one for deviation computation.
  10. The image processing apparatus for target detection according to claim 6, characterized in that the edge point separation module is further for:
    forming the unselected edge points of the second class and the edge points whose deviation is greater than or equal to the preset standard deviation into a secondary group; and
    discarding the secondary group.
  11. A machine-readable storage medium storing instructions for causing a machine to execute the image processing method for target detection according to any one of claims 1 to 5.
  12. A processor for executing the instructions stored in the machine-readable storage medium according to claim 11.
PCT/CN2020/103849 2019-07-25 2020-07-23 用于目标检测的图像处理方法及装置 WO2021013227A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20844715.1A EP3979196A4 (en) 2019-07-25 2020-07-23 IMAGE PROCESSING METHOD AND APPARATUS FOR TARGET DETECTION

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910677235.7 2019-07-25
CN201910677235.7A CN111539907B (zh) 2019-07-25 2019-07-25 用于目标检测的图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2021013227A1 true WO2021013227A1 (zh) 2021-01-28

Family

ID=71951959

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103849 WO2021013227A1 (zh) 2019-07-25 2020-07-23 用于目标检测的图像处理方法及装置

Country Status (3)

Country Link
EP (1) EP3979196A4 (zh)
CN (1) CN111539907B (zh)
WO (1) WO2021013227A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822204A (zh) * 2021-09-26 2021-12-21 中国科学院空天信息创新研究院 一种桥梁目标检测中桥梁边缘线的精确拟合方法
CN115082775A (zh) * 2022-07-27 2022-09-20 中国科学院自动化研究所 基于图像分块的超分辨率增强小目标检测方法

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN113096145B (zh) * 2021-03-29 2024-05-14 毫末智行科技有限公司 基于霍夫变换及线性回归的目标边界检测方法及装置
CN113658252B (zh) * 2021-05-17 2024-08-23 毫末智行科技有限公司 用于估计摄像头仰角的方法、介质、装置及该摄像头
CN113506237B (zh) * 2021-05-17 2024-05-24 毫末智行科技有限公司 用于确定对象的边界的方法及对象检测方法

Citations (8)

Publication number Priority date Publication date Assignee Title
JPH07200997A (ja) * 1993-12-29 1995-08-04 Nissan Motor Co Ltd 車両用環境認識装置
US20050273260A1 (en) * 2004-06-02 2005-12-08 Toyota Jidosha Kabushiki Kaisha Lane boundary detector
CN105139391A (zh) * 2015-08-17 2015-12-09 长安大学 一种雾霾天气交通图像边缘检测方法
CN105320940A (zh) * 2015-10-08 2016-02-10 江苏科技大学 一种基于中心-边界连接模型的交通标志检测方法
CN106203398A (zh) * 2016-07-26 2016-12-07 东软集团股份有限公司 一种检测车道边界的方法、装置和设备
CN109389024A (zh) * 2018-01-30 2019-02-26 长城汽车股份有限公司 基于影像识别路锥的方法、装置、存储介质以及车辆
CN109902806A (zh) * 2019-02-26 2019-06-18 清华大学 基于卷积神经网络的噪声图像目标边界框确定方法
CN109993046A (zh) * 2018-06-29 2019-07-09 长城汽车股份有限公司 基于视觉摄像机的自阴影物体边缘识别方法、装置及车辆

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN1380543A (zh) * 2001-04-12 2002-11-20 清华大学 一种工业辐射成像中的图像分割识别方法
CN101383005B (zh) * 2007-09-06 2012-02-15 上海遥薇(集团)有限公司 一种利用辅助规则纹理的乘客目标图像和背景分离方法
JP2012155612A (ja) * 2011-01-27 2012-08-16 Denso Corp 車線検出装置
CN102842037A (zh) * 2011-06-20 2012-12-26 东南大学 一种基于多特征融合的车辆阴影消除方法
CN103310218B (zh) * 2013-05-21 2016-08-10 常州大学 一种重叠遮挡果实精确识别方法
CN103839279A (zh) * 2014-03-18 2014-06-04 湖州师范学院 一种目标检测中基于vibe的粘连目标分割方法
CN105913415B (zh) * 2016-04-06 2018-11-30 博众精工科技股份有限公司 一种具有广泛适应性的图像亚像素边缘提取方法


Non-Patent Citations (1)

Title
YONG HUANG; JIANRU XUE: "Real-time traffic cone detection for autonomous vehicle", 2015 34TH CHINESE CONTROL CONFERENCE (CCC), TECHNICAL COMMITTEE ON CONTROL THEORY, CHINESE ASSOCIATION OF AUTOMATION, 28 July 2015 (2015-07-28), pages 3718 - 3722, XP033201805, DOI: 10.1109/ChiCC.2015.7260215 *


Also Published As

Publication number Publication date
CN111539907B (zh) 2023-09-12
EP3979196A1 (en) 2022-04-06
CN111539907A (zh) 2020-08-14
EP3979196A4 (en) 2022-07-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844715

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020844715

Country of ref document: EP

Effective date: 20211228

NENP Non-entry into the national phase

Ref country code: DE