WO2023024516A1 - Collision warning method and apparatus, electronic device, and storage medium - Google Patents
Collision warning method and apparatus, electronic device, and storage medium
- Publication number
- WO2023024516A1 (PCT/CN2022/084366)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06F18/232 (Pattern recognition; Analysing; Clustering techniques; Non-hierarchical techniques)
- G06T7/10 (Image analysis; Segmentation; Edge detection)
- G06T2207/10016 (Indexing scheme for image analysis; Image acquisition modality; Video; Image sequence)
Definitions
- The present disclosure relates to the technical field of image processing, and in particular to a collision warning method and apparatus, an electronic device, and a storage medium.
- In Advanced Driver Assistance Systems (ADAS), the forward collision warning function avoids potential collision risks by sensing the vehicle ahead and calculating the time to collision. Forward collision warning improves driving safety and is especially valuable when the driver is distracted, fatigued, or drowsy.
- Related forward collision warning methods mainly calculate the relative collision time from the relative position and velocity of the vehicles in order to decide whether to issue an alarm. With such methods, false positives often occur.
- Embodiments of the present disclosure provide at least a collision warning method and apparatus, an electronic device, and a storage medium.
- An embodiment of the present disclosure provides a collision warning method, comprising: acquiring a target image captured by a camera device installed on a vehicle; performing target detection on the target image; performing curve fitting on the lane lines in the target image based on the detected position information of each pixel belonging to a lane line, to obtain a fitted curve representing the position of each lane line in the target image; and issuing a collision warning to the vehicle based on the detected position information of the target object in the target image and the fitted curve of each lane line.
- An embodiment of the present disclosure also provides a collision warning apparatus, comprising: an acquisition module configured to acquire the target image captured by the camera device installed on the vehicle; a detection module configured to perform target detection on the target image; a fitting module configured to perform curve fitting on the lane lines in the target image based on the detected position information of each pixel belonging to a lane line, to obtain a fitted curve representing the position of each lane line in the target image; and an early warning module configured to issue a collision warning to the vehicle based on the detected position information of the target object in the target image and the fitted curve of each lane line.
- An embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the collision warning method described in the first aspect or any of its implementations are performed.
- Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the collision warning method described in the first aspect or any of its implementations are performed.
- FIG. 1 shows a flow chart of a collision warning method provided by an embodiment of the present disclosure
- FIG. 2 shows a schematic diagram of the application of a collision warning method provided by an embodiment of the present disclosure;
- FIG. 3 shows a schematic diagram of a collision warning apparatus provided by an embodiment of the present disclosure;
- FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
- Related forward collision warning methods mainly calculate the relative collision time from the relative position and speed of the vehicles in order to decide whether to issue an alarm. With such methods, false positives often occur.
- The present disclosure provides a collision warning method and apparatus, an electronic device, and a storage medium with high detection accuracy.
- The collision warning method provided by the embodiments of the present disclosure is generally executed by a computer device with certain computing capability.
- Such computer equipment includes, for example, a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a cellular phone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, and the like.
- In some implementations, the collision warning method may be implemented by a processor calling computer-readable instructions stored in a memory.
- FIG. 1 is a flowchart of a collision warning method provided by an embodiment of the present disclosure. The method includes steps S101 to S104:
- S101: Acquire a target image captured by a camera device installed on a vehicle.
- S102: Perform target detection on the target image.
- S103: Based on the detected position information of each pixel belonging to a lane line, perform curve fitting on the lane lines in the target image to obtain a fitted curve representing the position of each lane line in the target image.
- S104: Based on the detected position information of the target object in the target image and the fitted curve of each lane line, issue a collision warning.
- The above collision warning method can be applied mainly in Advanced Driver Assistance Systems (ADAS), using forward collision warning to avoid potential collision risks; it is especially effective when the driver is distracted, fatigued, or drowsy.
- Embodiments of the present disclosure determine the fitted curve of each lane line by curve fitting, so that the relationship between the target object and each lane line can be determined from the position information of the target object in the target image and the fitted curves, thereby realizing collision warning for vehicles in the same lane with high accuracy.
- the target image in the embodiment of the present disclosure may be an image captured by a camera device currently installed on the vehicle.
- The camera device here may be mounted facing forward, so that image information of the vehicle ahead can be captured at any time while the vehicle is driving, thereby realizing forward collision warning.
- Target detection is performed on the acquired target images.
- the target detection described in the embodiments of the present disclosure may include lane line detection on the one hand, and target detection in the lane line on the other hand.
- Lane line detection can be implemented based on semantic segmentation. For example, a trained semantic segmentation model determines each pixel belonging to a lane line, and combining these pixels yields the detected lane line.
- Detection of targets within the lane lines may be detection of pedestrian targets, vehicle targets, or non-motor-vehicle targets.
- For example, the attribute information of a vehicle can be determined by image detection, and the position of the vehicle in the target image determined from it; alternatively, the target vehicle can be identified directly from the target image by a trained vehicle detection model. In the following, the target vehicle is used as an example.
- curve fitting can be performed on the lane lines, and then a fitting curve representing the lane lines can be obtained.
- curve fitting can be implemented based on the position information of each pixel point on the lane line.
- The curve fitting process here can be the process of solving the equation parameters of a constructed fitting curve equation.
- Once the equation parameters are solved, the fitted curve represented by the fitting curve equation is determined.
- In this way, the relationship between the target object and each lane line can be determined, and forward vehicle collision warning within the same lane can be realized.
- Embodiments of the present disclosure can detect different lane lines separately, specifically through the following steps:
- Step 1 Perform semantic segmentation on the target image based on the trained first semantic segmentation model, and determine multiple lane line pixel points corresponding to the same lane line semantic label; wherein, the lane line semantic labels of different lane lines are different;
- Step 2 Determine multiple lane line pixel points corresponding to the same lane line semantic label as each pixel point belonging to the same lane line.
- Here, different lane line semantic labels can be set for different lane lines; for example, label 1 for the left lane line and label 2 for the right lane line, so that different lane lines can be detected separately.
- The above first semantic segmentation model can be trained on pixel-level semantic annotations, so that when the target image is semantically segmented with it, the lane line pixels corresponding to the same lane line semantic label can be determined; combining those pixels yields a detected lane line. For example, combining the lane line pixels corresponding to label 1 detects the left lane line.
- Alternatively, embodiments of the present disclosure may first perform unified detection of lane markings and then distinguish different lane markings by clustering, specifically through the following steps:
- Step 1: Perform semantic segmentation on the target image based on the trained second semantic segmentation model, and determine a plurality of lane line pixels corresponding to the lane line semantic label; here, every lane line has the same lane line semantic label.
- Step 2: From the lane line pixels, randomly select a preset number of pixels as the initial cluster centers of the lane lines.
- Step 3: Determine the distance from each lane line pixel to each initial cluster center, and assign each pixel to the lane line whose cluster center is closest.
- Step 4: Determine the new cluster center of each lane line and, based on the new cluster centers, return to the step of assigning the lane line pixels to the lane line with the closest cluster center, until the convergence condition is satisfied, obtaining the pixels belonging to each lane line.
- the same lane line semantic label can be set for different lane lines, for example, label 1 can be set for both the left lane line and the right lane line.
- the above-mentioned second semantic segmentation model may also be based on pixel-level semantic annotation, so that when the target image is semantically segmented using the second semantic segmentation model, a plurality of lane line pixel points corresponding to the lane line semantic labels may be determined, That is, the pixel point pointing to the lane line can be found from the target image.
- the division of lane lines can be realized based on pixel point clustering.
- First, the initial cluster centers of the lane lines are selected; then the distance from each lane line pixel to each initial cluster center is determined, and one clustering pass is performed based on the minimum distance.
- After each pass, a new cluster center can be determined and the next clustering pass performed based on the minimum distance, and so on, until the divided lane lines are obtained.
- a clustering algorithm such as mean-shift can be used to realize the above clustering process.
- The convergence condition here may be that the number of clustering iterations reaches a preset number, for example 15, or that the cluster centers no longer change or change little, or other conditions; it is not specifically limited here.
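The assignment/update loop in steps 2 to 4 follows the classic k-means pattern (the text also mentions mean-shift as one possible algorithm). A minimal sketch of that loop, with hypothetical pixel data and NumPy as an assumed dependency:

```python
import numpy as np

def cluster_lane_pixels(pixels, num_lanes, max_iters=15, tol=1e-3):
    """Assign lane-line pixels (an (N, 2) array) to num_lanes lanes by
    iterative nearest-center assignment, as in steps 2-4 above."""
    rng = np.random.default_rng(0)
    # Step 2: randomly select initial cluster centers from the pixels.
    centers = pixels[rng.choice(len(pixels), num_lanes, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iters):
        # Step 3: assign each pixel to the lane with the closest center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute centers; stop once they barely move (convergence).
        new_centers = np.array([
            pixels[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(num_lanes)
        ])
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return labels, centers
```

With well-separated left/right lane pixels the loop typically converges in a few iterations; the 15-iteration cap mirrors the preset-count convergence condition mentioned above.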
- step S103 may include the following steps S1031 to S1032.
- Step S1031: For the pixels included in a lane line, construct a fitting curve equation in which the longitudinal position variable of those pixels in the target image is the independent variable and their lateral position variable in the target image is the dependent variable.
- Step S1032: Select at least some of the pixels included in the lane line, determine the values of the equation parameters in the constructed fitting curve equation based on the longitudinal and lateral positions of the selected pixels in the target image, and use the fitting curve equation with those parameter values as the fitted curve representing the position of the lane line in the target image.
- the fitting curve equation can be constructed in advance, and then the equation parameter values of the fitting curve equation can be solved by using known data.
- The solved equation parameter values enable the fitting curve equation to characterize the fitted curve.
- The fitting curve equation here may represent the correspondence between the longitudinal position variable of the lane line pixels in the target image and their lateral position variable; for example, it can be constructed as a cubic polynomial x = a·y³ + b·y² + c·y + d.
- ⁇ a, b, c, d ⁇ can represent the equation parameters of the fitting curve equation
- y can represent the independent variable of the fitting curve equation
- x can represent the dependent variable of the fitting curve equation
- Substituting the longitudinal and lateral positions (in the target image) of the pixels selected from the lane line into the above equation as observation data, the equation parameter values {a, b, c, d} of the fitting curve equation can be determined, yielding a fitting curve equation with concrete parameter values that represents the corresponding fitted curve.
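Assuming the cubic form with parameters {a, b, c, d} described above, solving for the parameters from the selected pixels is an ordinary least-squares polynomial fit. A sketch with made-up pixel positions:

```python
import numpy as np

# Fit x = a*y**3 + b*y**2 + c*y + d to lane-line pixels by least squares.
# ys/xs are the longitudinal/lateral pixel positions (hypothetical data:
# a straight lane line, chosen only for illustration).
ys = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
xs = 0.5 * ys + 3.0
a, b, c, d = np.polyfit(ys, xs, deg=3)  # coefficients, highest degree first

def lane_x(y):
    """Lateral position of the fitted lane line at longitudinal position y."""
    return ((a * y + b) * y + c) * y + d
```

np.polyfit returns coefficients from the highest degree down, which matches the {a, b, c, d} ordering of the equation above.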
- The process of solving the equation parameter values may be the process of minimizing a constructed objective function containing the equation parameters of the fitting curve equation, specifically implemented through the following steps:
- Step 1 Determine the output result of the fitting curve equation based on the constructed fitting curve equation and the longitudinal position of the selected pixel point in the target image;
- Step 2 based on the output result of the fitting curve equation and the horizontal position of the selected pixel point in the target image, determine the objective function including the equation parameters of the fitting curve equation;
- Step 3 determining the equation parameter values of the fitting curve equation under the condition that the value of the objective function is minimum.
- Here, the longitudinal position of each selected pixel in the target image can be substituted into the constructed fitting curve equation; the output result is the predicted lateral position of that pixel in the target image.
- The lateral position of the pixel in the target image is the real lateral position.
- The equation parameter values of the fitting curve equation are obtained by minimizing the objective function; they are the parameter values that minimize the lateral position difference.
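Under the cubic assumption, steps 1 to 3 amount to minimizing J(a, b, c, d) = Σᵢ (a·yᵢ³ + b·yᵢ² + c·yᵢ + d - xᵢ)², a linear least-squares problem. A sketch with invented observation data:

```python
import numpy as np

# Objective: J(a,b,c,d) = sum_i (a*y_i**3 + b*y_i**2 + c*y_i + d - x_i)**2.
# Minimizing J is linear least squares: solve V @ params ≈ xs.
ys = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
xs = np.array([3.0, 3.6, 4.1, 4.4, 4.6])   # observed lateral positions (made up)
V = np.stack([ys**3, ys**2, ys, np.ones_like(ys)], axis=1)  # design matrix
params, *_ = np.linalg.lstsq(V, xs, rcond=None)
predicted = V @ params                # step 1: output lateral positions
J = np.sum((predicted - xs) ** 2)     # steps 2-3: objective at the minimum
```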
- An embodiment of the present disclosure provides a scheme for screening lane line pixels, specifically implemented through the following steps:
- Step 1: For each pixel of the lane line, obtain the semantic score with which the pixel belongs to the lane line semantic label.
- Step 2: Rank the pixels by semantic score from high to low, and select some pixels according to the ranking result.
- Pixels with higher semantic scores tend to be high-quality pixels, for example pixels centered on the lane line or of high resolution. Screening pixels based on the semantic-score ranking therefore keeps the fitted curve sufficiently complete (that is, the number of selected pixels is sufficient) while reducing the computational cost of fitting; at the same time, the selected high-scoring pixels represent the lane line more accurately.
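The score-based screening of steps 1 and 2 reduces to a sort-and-truncate. A toy sketch with hypothetical (y, x, score) tuples:

```python
# Select the top-scoring lane-line pixels for fitting.
# Each tuple is (longitudinal y, lateral x, semantic score); values are invented.
pixels = [(120, 40, 0.95), (121, 42, 0.60), (122, 44, 0.88), (123, 46, 0.30)]
k = 2  # number of pixels to keep (a tunable budget)
ranked = sorted(pixels, key=lambda p: p[2], reverse=True)  # high-to-low score
selected = ranked[:k]
```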
- In addition, the longitudinal and lateral positions of the pixels in the target image can be converted into the bird's-eye view, specifically through the following steps:
- Step 1: Based on the first conversion relationship between the image coordinate system of the target image and the world coordinate system, and the second conversion relationship between the image coordinate system of the bird's-eye view and the world coordinate system, project the selected pixels into the image coordinate system of the bird's-eye view to obtain the longitudinal and lateral positions of the pixels in the bird's-eye view.
- Step 2: Determine the equation parameter values of the constructed fitting curve equation based on the longitudinal and lateral positions of the selected pixels in the bird's-eye view.
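Composing the two conversion relationships (image to world, world to bird's-eye view) yields a single 3×3 homography from the target image to the bird's-eye view. A sketch of applying such a homography, assuming H has already been obtained from camera calibration:

```python
import numpy as np

def to_birds_eye(points_img, H):
    """Project image pixels (an (N, 2) array) into the bird's-eye view with a
    3x3 homography H, assumed to be composed from the image->world and
    world->bird's-eye conversions obtained by calibration."""
    pts = np.hstack([points_img, np.ones((len(points_img), 1))])  # homogeneous
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]  # divide by w to get (x, y) in the BEV
```

As a sanity check, the identity homography leaves points unchanged, and a diagonal scaling homography scales them.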
- In the above collision warning method, after the fitted curve of each lane line is obtained as described above, it can first be determined, based on the detected position information of the target object in the target image and the fitted curves of the lane lines, whether the target object is in the lane where the current vehicle is located. Once the target object is determined to be in the same lane as the current vehicle, a collision warning can be issued based on the fitted curves of that lane's lane lines and the position information of the target object in the target image. Performing collision warning only for target objects in the same lane as the current vehicle avoids false detections across lanes and improves the accuracy of collision warning.
- Step 1: Based on the longitudinal position of the target object in the target image and the fitted curves of the two lane lines of the lane where the current vehicle is located, determine the lateral positions corresponding to that longitudinal position on the fitted curves of the two lane lines.
- Step 2: If the lateral position of the target object in the target image lies between the two lateral positions corresponding to the two lane lines, determine that the target object is in the lane where the current vehicle is located.
- Here, the lateral positions on the fitted curves of the two lane lines corresponding to the longitudinal position can be determined.
- For example, the longitudinal position y can be substituted into the fitting curve equations of the two lane lines to obtain their lateral positions at y, denoted x_l and x_r. If the lateral position x of the target object in the target image lies between the two, that is, x_l < x < x_r, the target object is in the lane where the current vehicle is located; otherwise it is in another lane.
- In this way, the relative positional relationship between the target object and the two lane lines can be determined, realizing collision warning even on curves.
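The in-lane test above can be sketched as follows, assuming the cubic lane-line equation from earlier (the coefficient values used below are hypothetical):

```python
def lane_x(y, coeffs):
    """Lateral position of a fitted lane line x = a*y**3 + b*y**2 + c*y + d."""
    a, b, c, d = coeffs
    return ((a * y + b) * y + c) * y + d

def in_ego_lane(x, y, left_coeffs, right_coeffs):
    """True if the target at (x, y) lies between the two fitted lane lines,
    i.e. x_l < x < x_r at the target's longitudinal position y."""
    x_l = lane_x(y, left_coeffs)
    x_r = lane_x(y, right_coeffs)
    return min(x_l, x_r) < x < max(x_l, x_r)
```

Because x_l and x_r are evaluated at the target's own y, the test remains valid on curved lanes, where the lane boundaries shift laterally with distance.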
- After determining that the target object is in the lane where the current vehicle is located, the collision warning for the target object can be realized, specifically through the following steps:
- Step 1: Based on the position information of the target object in the target image, determine the target curve segment between the current vehicle and the target object on the fitted curves of the two lane lines of the lane.
- Step 2: Calculate the actual driving distance between the target object and the current vehicle based on the target curve segments corresponding to the two lane lines.
- Step 3: Based on the actual driving distance, determine the expected collision duration between the target object and the current vehicle.
- Here, a target curve segment between the current vehicle and the target vehicle can be determined from the fitted curves of the two lane lines of the lane where the current vehicle is located; FIG. 2 shows an example target curve segment. Based on this target curve segment, the actual driving distance between the two vehicles can be determined and, combined with the current driving speed of the vehicle, the estimated collision duration. In this embodiment, considering that the actual lane may be curved, determining the actual driving distance along the target curve segment and then the estimated collision duration better matches real application scenarios.
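Steps 1 to 3 can be sketched by numerically integrating arc length along the fitted curve between the two vehicles and dividing by the closing speed. Units are simplified here (the pixel-to-metre scale and the cubic coefficients are assumptions for illustration):

```python
import numpy as np

def time_to_collision(coeffs, y_ego, y_target, closing_speed, samples=100):
    """Estimate time to collision along a fitted lane curve.

    The arc length of the target curve segment between the ego vehicle
    (y_ego) and the target (y_target) is integrated numerically, then
    divided by the closing speed (assumed > 0, same units per second).
    """
    a, b, c, d = coeffs
    ys = np.linspace(y_ego, y_target, samples)
    xs = ((a * ys + b) * ys + c) * ys + d
    seg = np.sqrt(np.diff(xs) ** 2 + np.diff(ys) ** 2)
    distance = seg.sum()  # actual driving distance along the curve segment
    return distance / closing_speed
```

On a straight lane the arc length reduces to the plain longitudinal gap; on a curve it is longer, which is why the curve segment, not the straight-line distance, is used.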
- In practice, the shorter of the two fitted curve segments can be selected, further ensuring the timeliness of the collision warning.
- a collision warning message may be issued.
- the collision warning information here may be realized by means of blinking indicator lights, or may be realized by voice, which is not specifically limited in this embodiment of the present disclosure.
- The writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- An embodiment of the present disclosure also provides a collision warning apparatus corresponding to the collision warning method. Since the problem-solving principle of the apparatus is similar to that of the collision warning method described above, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
- Referring to FIG. 3, a schematic diagram of a collision warning apparatus provided by an embodiment of the present disclosure, the apparatus includes an acquisition module 301, a detection module 302, a fitting module 303, and an early warning module 304, wherein:
- The acquisition module 301 is configured to acquire the target image captured by the camera device arranged on the vehicle;
- the detection module 302 is configured to perform target detection on the target image;
- the fitting module 303 is configured to perform curve fitting on the lane lines in the target image based on the detected position information of each pixel belonging to a lane line, to obtain a fitted curve representing the position of each lane line in the target image;
- the warning module 304 is configured to issue a collision warning to the vehicle based on the detected position information of the target object in the target image and the fitting curve of each lane line.
- the warning module 304 is configured to issue a collision warning to the vehicle based on the detected position information of the target object in the target image and the fitting curve of each lane line according to the following steps:
- a collision warning is issued to the vehicle based on the fitting curve of the lane line of the lane and the position information of the target object in the target image.
- The early warning module 304 is configured to determine, according to the following steps, whether the target object is in the lane where the vehicle is currently located, based on the detected position information of the target object in the target image and the fitted curve of each lane line:
- Based on the longitudinal position of the target object in the target image and the fitted curves of the two lane lines of the vehicle's current lane, determine the lateral positions corresponding to that longitudinal position on the fitted curves of the two lane lines; if the lateral position of the target object in the target image lies between the two lateral positions corresponding to the two lane lines, determine that the target object is in the lane where the vehicle is currently located.
- the warning module 304 is configured to issue a collision warning to the vehicle based on the fitting curve of the lane line of the lane and the position information of the target object in the target image according to the following steps:
- the actual driving distance between the target object and the vehicle is calculated
- the detection module 302 is configured to determine each pixel point belonging to the lane line according to the following steps:
- Semantic segmentation is performed on the target image based on the trained first semantic segmentation model, and multiple lane line pixel points corresponding to the same lane line semantic label are determined; wherein, the lane line semantic labels of different lane lines are different;
- the detection module 302 determines each pixel point belonging to the lane line according to the following steps:
- The fitting module 303 is configured to perform curve fitting on the lane lines in the target image based on the detected position information of each pixel belonging to a lane line, to obtain a fitted curve representing the position of each lane line in the target image, according to the following steps:
- construct a fitting curve equation in which the longitudinal position variable of the pixels included in the lane line in the target image is the independent variable and their lateral position variable in the target image is the dependent variable;
- use the fitting curve equation with the solved parameter values to represent the fitted curve of the position of the lane line in the target image.
- the fitting module 303 is configured to determine the values of the equation parameters in the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the target image according to the following steps:
- the fitting module 303 is configured to select some pixel points from the pixel points included in the lane line according to the following steps:
- the fitting module 303 is configured to determine the values of the equation parameters in the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the target image according to the following steps:
- The selected pixels are projected into the image coordinate system of the bird's-eye view to obtain the longitudinal and lateral positions of the pixels in the bird's-eye view;
- FIG. 4 is a schematic structural diagram of the electronic device provided by the embodiment of the present disclosure, including: a processor 401 , a memory 402 , and a bus 403 .
- The memory 402 stores machine-readable instructions executable by the processor 401 (for example, execution instructions corresponding to the acquisition module 301, the detection module 302, the fitting module 303, and the early warning module 304 in the apparatus of FIG. 3). The processor 401 communicates with the memory 402 through the bus 403, and when the machine-readable instructions are executed by the processor 401, the following processing is performed:
- a collision warning is issued.
- Embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the collision warning method described in the foregoing method embodiments are executed.
- the storage medium may be a volatile or non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the collision warning method described in the above method embodiments. For details, refer to the above method embodiments; they are not repeated here.
- the above-mentioned computer program product may be specifically implemented by means of hardware, software or a combination thereof.
- in an optional embodiment, the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
- the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
- the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, optical discs, and other media that can store program code.
Abstract
The present disclosure provides a collision warning method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a target image captured by a camera mounted on a vehicle; performing target detection on the target image; performing curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image; and issuing a collision warning to the vehicle based on the detected position information of a target object in the target image and the fitted curves of the lane lines. By using curve fitting, the embodiments of the present disclosure can determine the fitted curve of each lane line; in this way, the relationship between the target object and each lane line can be determined from the position information of the target object in the target image and the fitted curves, enabling vehicle collision warning at the level of the same lane with high accuracy.
Description
Cross-reference to related applications
This application claims priority to Chinese patent application No. CN202110970366.1, filed with the Chinese Patent Office on August 23, 2021, the entire contents of which are incorporated into the present disclosure by reference.
The present disclosure relates to the technical field of image processing, and in particular to a collision warning method and apparatus, an electronic device, and a storage medium.
In vehicle-mounted Advanced Driver Assistance Systems (ADAS), an important warning function is forward collision warning. Forward collision warning avoids potential collision risks by perceiving vehicles ahead and computing the time to collision. It improves driving safety and is especially valuable when the driver is distracted, fatigued, or drowsy.
Related forward collision warning methods mainly compute a relative time to collision from the relative positions and speeds of the vehicles, and thereby decide whether to raise an alarm. However, for vehicles ahead on complex or unusual lanes, missed and false alarms often occur. For example, on a narrow lane, a vehicle in the adjacent oncoming lane is easily judged to be in the ego lane, causing a false alarm.
Summary
Embodiments of the present disclosure provide at least a collision warning method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a collision warning method, the method including: acquiring a target image captured by a camera mounted on a vehicle; performing target detection on the target image; performing curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image; and issuing a collision warning to the vehicle based on the detected position information of a target object in the target image and the fitted curves of the lane lines.
In a second aspect, an embodiment of the present disclosure further provides a collision warning apparatus, the apparatus including: an acquisition module configured to acquire a target image captured by a camera mounted on a vehicle; a detection module configured to perform target detection on the target image; a fitting module configured to perform curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image; and a warning module configured to issue a collision warning to the vehicle based on the detected position information of a target object in the target image and the fitted curves of the lane lines.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the collision warning method according to the first aspect and any of its implementations are performed.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the collision warning method according to the first aspect and any of its implementations are performed.
For the effects of the above collision warning apparatus, electronic device, and computer-readable storage medium, reference is made to the description of the collision warning method, which is not repeated here.
To make the above objects, features, and advantages of the present disclosure clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and form part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and therefore should not be regarded as limiting the scope; a person of ordinary skill in the art may obtain other related drawings from these drawings without creative effort.
FIG. 1 shows a flowchart of a collision warning method provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of an application of a collision warning method provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a collision warning apparatus provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings here, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the claimed scope of the present disclosure but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of them; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set formed by A, B, and C.
Research has found that related forward collision warning methods mainly compute a relative time to collision from the relative positions and speeds of the vehicles to decide whether to raise an alarm. However, for vehicles ahead on complex or unusual lanes, missed and false alarms often occur; for example, on a narrow lane, a vehicle in the adjacent oncoming lane is easily judged to be in the ego lane, causing a false alarm.
Based on the above research, the present disclosure provides a collision warning method and apparatus, an electronic device, and a storage medium with high detection accuracy.
To facilitate understanding of this embodiment, a collision warning method disclosed in the embodiments of the present disclosure is first described in detail. The execution body of the collision warning method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device. The terminal device may be user equipment (User Equipment, UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the collision warning method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to FIG. 1, a flowchart of the collision warning method provided by an embodiment of the present disclosure, the method includes steps S101 to S104, where:
S101: acquire a target image captured by a camera mounted on a vehicle;
S102: perform target detection on the target image;
S103: perform curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image;
S104: issue a collision warning based on the detected position information of a target object in the target image and the fitted curves of the lane lines.
To facilitate understanding of the collision warning method provided by the embodiments of the present disclosure, its application scenario is first briefly described. The above collision warning method can mainly be applied in Advanced Driver Assistance Systems (ADAS), where forward collision warning avoids potential collision risks and is especially valuable when the driver is distracted, fatigued, or drowsy.
In the related collision warning schemes that compute a relative time to collision from the relative positions and speeds of the vehicles, missed and false alarms often occur for vehicles ahead on complex or unusual lanes. For example, on a large curve or other scenarios with non-straight lane lines, the vehicle ahead determined from relative positions is not the true leading vehicle on the ego vehicle's driving path, causing missed or false alarms; likewise, on a narrow lane, a vehicle in the adjacent oncoming lane is easily judged to be in the ego lane, causing a false alarm.
Precisely to solve the above problems, the embodiments of the present disclosure use curve fitting to determine the fitted curve of each lane line. In this way, the relationship between the target object and each lane line can be determined from the position information of the target object in the target image and the fitted curves, enabling vehicle collision warning within the same lane with high accuracy.
The target image in the embodiments of the present disclosure may be an image captured by a camera mounted on the current vehicle. To better implement forward collision warning, the camera here may be mounted facing forward, so that image information of vehicles ahead can be captured at any time while the vehicle is moving, thereby implementing forward collision warning.
Target detection is performed on the acquired target image. The target detection described in the embodiments of the present disclosure may include, on the one hand, lane line detection and, on the other hand, detection of targets within the lanes. Lane line detection may be implemented by semantic segmentation; for example, a trained semantic segmentation model can determine the pixels belonging to lane lines, and the detected lane lines are obtained by combining these pixels. Detection of targets within the lanes may be detection of pedestrian target objects, vehicle target objects, or non-motor-vehicle target objects. For example, attribute information of a vehicle may be determined by image detection, and then the position of the vehicle in the target image may be determined; for another example, a trained vehicle detection model may directly recognize the target vehicle from the target image. In the following, a target vehicle is used as the target object for illustration.
Here, before vehicle collision warning, curve fitting may be performed on the lane lines to obtain fitted curves representing the lane lines.
In the embodiments of the present disclosure, curve fitting may be implemented based on the position information of the pixels belonging to each lane line. The curve fitting process here may be a process of solving for the equation parameters of a constructed fitting curve equation; once the parameter values of the fitting curve equation are solved, the fitted curve represented by this fitting curve equation is determined.
Using the fitted curves and the position information of the target object in the target image, the relationship between the target object and each lane line can be determined, and forward vehicle collision warning within the same lane can thus be implemented.
Considering the key role of lane line detection for the subsequent curve fitting of lane lines, the process of detecting lane lines is described below in two aspects.
First aspect: the embodiments of the present disclosure can detect and distinguish different lane lines, which may be implemented by the following steps:
Step 1: perform semantic segmentation on the target image based on a trained first semantic segmentation model, and determine multiple lane line pixels corresponding to the same lane line semantic label, where different lane lines have different lane line semantic labels;
Step 2: determine the multiple lane line pixels corresponding to the same lane line semantic label as the pixels belonging to the same lane line.
In the process of training the first semantic segmentation model, different lane line semantic labels may be set for different lane lines; for example, the left lane line may be set to label 1 and the right lane line to label 2, so that different lane lines can be detected.
The first semantic segmentation model may use pixel-level semantic annotation. In this way, when the target image is semantically segmented with the first semantic segmentation model, the multiple lane line pixels corresponding to the same lane line semantic label can be determined, and combining them yields one detected lane line; for example, combining the multiple lane line pixels belonging to label 1 yields the detected left lane line.
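As a minimal sketch of the label-grouping step above (not the patent's implementation; the segmentation output shape and the label convention are assumptions), grouping pixels by their per-pixel semantic label might look like:

```python
import numpy as np

def group_lane_pixels(label_map: np.ndarray) -> dict:
    """Group pixel coordinates by lane-line semantic label.

    label_map: H x W integer array; 0 = background,
    1, 2, ... = distinct lane-line labels (assumed convention).
    Returns {label: array of (row, col) pixel coordinates}.
    """
    lanes = {}
    for label in np.unique(label_map):
        if label == 0:          # skip background pixels
            continue
        lanes[int(label)] = np.argwhere(label_map == label)
    return lanes

# toy 4x6 segmentation map with two lane-line labels
seg = np.zeros((4, 6), dtype=int)
seg[:, 1] = 1                   # "left" lane line, label 1
seg[:, 4] = 2                   # "right" lane line, label 2
lanes = group_lane_pixels(seg)
```

Each value in `lanes` is then one detected lane line, ready for the curve fitting described later.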
Second aspect: the embodiments of the present disclosure may first detect lane lines uniformly and then distinguish different lane lines by clustering. This may be implemented by the following steps:
Step 1: perform semantic segmentation on the target image based on a trained second semantic segmentation model, and determine multiple lane line pixels corresponding to the lane line semantic label, where all lane lines share the same lane line semantic label;
Step 2: from the multiple lane line pixels, randomly select a preset number of lane line pixels as the initial cluster centers of the lane lines;
Step 3: determine the distances between the multiple lane line pixels and each initial cluster center, and assign each lane line pixel to the lane line whose cluster center is nearest;
Step 4: determine a new cluster center for each lane line and, based on the new cluster centers, return to the step of assigning the lane line pixels to the lane line with the nearest cluster center, until the convergence condition of the partition is met, thereby obtaining the pixels belonging to each lane line.
Here, in the process of training the second semantic segmentation model, the same lane line semantic label may be set for different lane lines; for example, both the left and right lane lines may be set to label 1.
The second semantic segmentation model may also use pixel-level semantic annotation, so that when the target image is semantically segmented with the second semantic segmentation model, the multiple lane line pixels corresponding to the lane line semantic label can be determined; that is, the pixels pointing to lane lines can be found from the target image.
Here, the lane lines may be partitioned by clustering the pixels. First, initial cluster centers of the lane lines may be selected; then the distances between the lane line pixels and each initial cluster center may be determined, and a first aggregation is performed by minimum distance. New cluster centers are determined from the pixels aggregated in the current round, the next aggregation is again performed by minimum distance, and so on, until the partitioned lane lines are obtained.
In specific applications, clustering algorithms such as mean-shift may be used to implement the above clustering process. The convergence condition here may be that the number of clustering rounds reaches a preset number, for example 15; it may also be that the cluster centers no longer change or change only slightly; other conditions are also possible, and no specific limitation is made here.
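The assign/update loop of steps 2 to 4 can be sketched as a simple k-means-style routine. This is a hedged illustration: the text also mentions mean-shift, and where it selects initial centers randomly, this sketch uses a deterministic farthest-point seeding so the example is reproducible; the Euclidean distance and convergence test are likewise assumptions.

```python
import numpy as np

def cluster_lane_pixels(points, k, iters=15):
    """Partition lane-line pixels into k lane lines by repeated
    assign-to-nearest-center / recompute-center rounds.

    points: N x 2 array of (row, col) lane pixels.
    Stops when the centers stop moving or after `iters` rounds
    (both stopping rules appear in the text above).
    """
    # deterministic farthest-point seeding (the text uses random seeds)
    centers = [points[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[d.argmax()].astype(float))
    centers = np.array(centers)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # distance of every pixel to every current center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# two well-separated vertical pixel strips standing in for two lane lines
pts = np.array([[r, 1] for r in range(10)] + [[r, 20] for r in range(10)])
labels, centers = cluster_lane_pixels(pts, k=2)
```

In practice a library mean-shift (which does not need k fixed in advance) could replace this loop; the structure of the iteration is the same.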
In the collision warning method provided by the embodiments of the present disclosure, step S103 may include the following steps S1031 to S1032.
Step S1031: for the pixels included in a lane line, construct a fitting curve equation whose independent variable is the vertical position variable of the lane line's pixels in the target image and whose dependent variable is the horizontal position variable of the lane line's pixels in the target image;
Step S1032: select at least some of the pixels included in the lane line and, based on the vertical and horizontal positions of the selected pixels in the target image, determine the equation parameter values of the constructed fitting curve equation; the fitting curve equation containing the parameter values is used as the fitted curve representing the position of the lane line in the target image.
Considering the correspondence between a fitted curve and its fitting curve equation, the fitting curve equation may be constructed in advance, and its parameter values may then be solved from known data. The solved parameter values make the corresponding fitting curve equation represent the fitted curve.
The fitting curve equation here may represent the correspondence between the vertical position variable of the lane line's pixels in the target image and their horizontal position variable in the target image. For example, it may be constructed as the following equation:
x = ay³ + by² + cy + d
where {a, b, c, d} are the equation parameters of the fitting curve equation, y is its independent variable, and x is its dependent variable.
Here, the vertical and horizontal positions in the target image of the pixels selected from the lane line's pixels are substituted into the above equation as observations, and the values of {a, b, c, d} can be solved; that is, the equation parameter values of the fitting curve equation can be determined, yielding a fitting curve equation with parameter values that represents the corresponding fitted curve.
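Solving for {a, b, c, d} from the observed (y, x) pairs is an ordinary least-squares problem. A minimal sketch follows; the cubic form matches the equation above, while the sample data and solver choice are illustrative assumptions:

```python
import numpy as np

def fit_lane_cubic(ys, xs):
    """Fit x = a*y^3 + b*y^2 + c*y + d by least squares.

    ys, xs: vertical / horizontal positions of the selected lane
    pixels. Minimizing sum((xs - A @ p)**2) is exactly the
    squared horizontal-position difference described in the text.
    """
    A = np.stack([ys**3, ys**2, ys, np.ones_like(ys)], axis=1)
    params, *_ = np.linalg.lstsq(A, xs, rcond=None)
    return params  # [a, b, c, d]

# pixels sampled from a known cubic, so the fit should recover it
y = np.linspace(0.0, 10.0, 50)
x = 0.01 * y**3 - 0.2 * y**2 + 1.5 * y + 4.0
a, b, c, d = fit_lane_cubic(y, x)
```

With parameter values in hand, the equation itself serves as the fitted curve of the lane line.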
In the embodiments of the present disclosure, the process of solving the equation parameter values may be a process of minimizing a constructed objective function that contains the equation parameters of the fitting curve equation, implemented by the following steps:
Step 1: determine the output of the fitting curve equation based on the constructed fitting curve equation and the vertical positions of the selected pixels in the target image;
Step 2: determine an objective function containing the equation parameters of the fitting curve equation, based on the output of the fitting curve equation and the horizontal positions of the selected pixels in the target image;
Step 3: determine the equation parameter values of the fitting curve equation that minimize the value of the objective function.
Here, the vertical positions of the selected pixels in the target image may be substituted into the constructed fitting curve equation; the resulting output points to the output horizontal positions of the pixels in the target image. Combined with the horizontal positions of the selected pixels in the target image (i.e., the true horizontal positions), the difference between the two horizontal positions is determined; the smaller the difference, the better the fit. In one example, the difference between the two horizontal positions may be taken as the value of the objective function.
In the embodiments of the present disclosure, the equation parameter values of the fitting curve equation are obtained by minimizing the objective function; these parameter values point to the minimum horizontal position difference.
Considering that the number of pixels corresponding to a lane line is huge, having them all participate directly in the fitting would consume substantial computing resources. The embodiments of the present disclosure therefore provide a scheme for filtering the lane line's pixels, which may be implemented by the following steps:
Step 1: for each of the lane line's pixels, obtain the semantic score with which the pixel belongs to the lane line semantic label;
Step 2: rank the pixels in descending order of semantic score to obtain a ranking result, and select some of the pixels according to the ranking result.
Here, pixels with high semantic scores usually point to high-quality pixels, such as pixels centered on the lane line or pixels with high clarity. Therefore, pixels can be filtered based on the semantic-score ranking; provided the fitted curve remains sufficiently complete (i.e., enough pixels are selected), this also reduces the computational cost of fitting, while the top-ranked pixels represent the lane line more accurately.
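The score-based filtering above amounts to a simple top-k selection; a hedged sketch, where the score array and cut-off k are illustrative assumptions:

```python
import numpy as np

def select_top_pixels(points, scores, k):
    """Keep the k lane pixels with the highest semantic scores.

    points: N x 2 pixel coordinates; scores: N semantic scores
    (the per-pixel lane-line label score described above).
    """
    order = np.argsort(scores)[::-1]   # ranking by descending score
    return points[order[:k]]

pts = np.array([[0, 5], [1, 6], [2, 7], [3, 8]])
scores = np.array([0.2, 0.9, 0.6, 0.95])
top2 = select_top_pixels(pts, scores, 2)
```

Only the selected pixels are then passed to the least-squares fit, keeping the computation bounded.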
Here, to eliminate the influence of perspective, before curve fitting, the vertical and horizontal positions of the pixels in the target image may first be converted to the bird's-eye view. This includes the following steps:
Step 1: based on a first conversion relationship between the image coordinate system of the target image and the world coordinate system, and a second conversion relationship between the image coordinate system of the bird's-eye view and the world coordinate system, project the selected pixels into the image coordinate system of the bird's-eye view, to obtain the vertical and horizontal positions of each pixel in the bird's-eye view;
Step 2: determine the equation parameter values of the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the bird's-eye view.
After the projection, straight lane lines that would converge at the horizon in the two-dimensional target image become vertical and parallel; curves will not be vertical, but they also become easier to fit in the bird's-eye view.
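Composing the two conversion relationships yields a single homography from image coordinates to bird's-eye-view coordinates. A hedged sketch with an assumed 3×3 matrix H (a real matrix would come from the camera calibration the text refers to; here a translation-only homography stands in):

```python
import numpy as np

def project_to_bev(points_xy, H):
    """Apply a 3x3 homography to image points in homogeneous coords.

    points_xy: N x 2 (x, y) pixel positions in the target image;
    H: 3 x 3 matrix mapping the target-image coordinate system to
    the bird's-eye-view image coordinate system.
    """
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])
    bev = (H @ pts.T).T
    return bev[:, :2] / bev[:, 2:3]    # divide out the homogeneous w

# identity-plus-shift homography as a stand-in for a calibrated one
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0, 1.0]])
bev = project_to_bev(np.array([[100.0, 200.0]]), H)
```

The curve fitting then runs on the projected (vertical, horizontal) positions instead of the raw image positions.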
In the collision warning method provided by the embodiments of the present disclosure, once the fitted curves of the lane lines have been obtained as above, it may first be determined, based on the detected position information of the target object in the target image and the fitted curves of the lane lines, whether the target object is in the lane where the current vehicle is located. Once it is determined that the target object shares a lane with the current vehicle, a collision warning may be issued based on the fitted curves of that lane's lane lines and the position information of the target object in the target image. Performing collision warning on target objects in the same lane as the current vehicle avoids false detections across different lanes and improves the accuracy of collision warning.
The embodiments of the present disclosure may determine whether the target object is in the lane where the current vehicle is located by the following steps:
Step 1: based on the vertical position of the target object in the target image and the fitted curves of the two lane lines of the current vehicle's lane, determine the horizontal positions on the two fitted curves corresponding to that vertical position;
Step 2: if the horizontal position of the target object in the target image lies between the two horizontal positions corresponding to the two lane lines, determine that the target object is in the lane where the current vehicle is located.
Here, based on the vertical position of the target object in the target image and the fitted curves of the two lane lines of the current vehicle's lane, the horizontal positions on the two fitted curves corresponding to that vertical position may first be determined. In the embodiments of the present disclosure, the vertical position may be substituted into the fitting curve equations of the two lane lines to obtain the horizontal positions of the two lane lines at y, namely x_l and x_r. If the horizontal position x of the target object in the target image lies between the two horizontal positions corresponding to the two lane lines, i.e., x_l < x < x_r, the target object is in the lane where the current vehicle is located; otherwise, the target object is in another lane. In the embodiments of the present disclosure, the relative positional relationship between the target object and the two lane lines can be determined from the horizontal position of the target object in the target image and the two horizontal positions corresponding to the lane lines, thereby enabling collision warning on curves.
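The x_l < x < x_r test can be sketched directly from the fitted cubics; the parameter values below are illustrative (two straight vertical lane lines), not from the patent:

```python
import numpy as np

def in_ego_lane(obj_x, obj_y, left_params, right_params):
    """Check whether (obj_x, obj_y) lies between the two lane curves.

    left_params / right_params: [a, b, c, d] of x = ay^3 + by^2 + cy + d
    for the left and right lane line of the current vehicle's lane.
    """
    x_l = np.polyval(left_params, obj_y)   # left lane-line x at the object's y
    x_r = np.polyval(right_params, obj_y)  # right lane-line x at the object's y
    return x_l < obj_x < x_r

left = [0.0, 0.0, 0.0, 100.0]    # straight lane line x = 100
right = [0.0, 0.0, 0.0, 200.0]   # straight lane line x = 200
```

Because the curves are evaluated at the object's own y, the same test works on curved lanes, which is the point of fitting curves rather than using straight-line lane boundaries.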
In the embodiments of the present disclosure, for a target object located in the lane of the current vehicle, collision warning for the target object can be implemented by the following steps:
Step 1: based on the position information of the target object in the target image, determine the target curve segments of the lane's two fitted lane-line curves that lie between the current vehicle and the target object;
Step 2: compute the actual travel distance between the target object and the current vehicle based on the target curve segments corresponding to the two lane lines;
Step 3: determine the estimated time to collision between the target object and the current vehicle based on the actual travel distance;
and issue a collision warning if the estimated time to collision is within a target duration.
Here, the target curve segment lying between the current vehicle and the target vehicle may first be determined from the fitted curves of the two lane lines of the current vehicle's lane; FIG. 2 shows an example target curve segment. The actual travel distance between the two vehicles can be determined from this target curve segment, and the estimated time to collision can then be determined in combination with the current vehicle's driving speed. In this embodiment, considering the possibility of curves in the actual lane, determining the actual travel distance between the two vehicles from the target curve segment, and from it the estimated time to collision, better matches real application scenarios.
It should be noted that the shorter of the two target curve segments on the two fitted curves may be selected, further ensuring the timeliness of the collision warning.
Here, when the estimated time to collision is determined to be short, for example less than 2 minutes, collision alarm information may be issued. The collision alarm here may be implemented by a blinking indicator light or by voice; the embodiments of the present disclosure impose no specific limitation on this.
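The arc length of the curve segment and the resulting time-to-collision check can be sketched as follows. The cubic form and the 2-minute threshold come from the text above; the integration step, the units (real use would need bird's-eye-view coordinates scaled to meters), and the sample speed are assumptions:

```python
import numpy as np

def segment_length(params, y0, y1, n=1000):
    """Arc length of x = ay^3 + by^2 + cy + d between y0 and y1,
    via trapezoid-rule integration of sqrt(1 + (dx/dy)^2)."""
    a, b, c, _ = params
    y = np.linspace(y0, y1, n)
    f = np.sqrt(1.0 + (3 * a * y**2 + 2 * b * y + c) ** 2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

def should_warn(params, y_ego, y_target, speed, threshold_s=120.0):
    """Warn when the estimated time to collision (travel distance /
    speed) falls within the target duration (2 minutes above)."""
    distance = segment_length(params, y_ego, y_target)
    return distance / speed < threshold_s

straight = [0.0, 0.0, 0.0, 50.0]   # a straight lane line x = 50
```

On a straight segment the arc length reduces to the vertical gap, while on a curve it is longer than the straight-line distance, which is exactly why the patent measures distance along the target curve segment.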
Those skilled in the art can understand that in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide a collision warning apparatus corresponding to the collision warning method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to the above collision warning method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repetitions are not described again.
Referring to FIG. 3, a schematic diagram of a collision warning apparatus provided by an embodiment of the present disclosure, the apparatus includes: an acquisition module 301, a detection module 302, a fitting module 303, and a warning module 304; where:
the acquisition module 301 is configured to acquire a target image captured by a camera mounted on a vehicle;
the detection module 302 is configured to perform target detection on the target image;
the fitting module 303 is configured to perform curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image;
the warning module 304 is configured to issue a collision warning to the vehicle based on the detected position information of a target object in the target image and the fitted curves of the lane lines.
In a possible implementation, the warning module 304 is configured to issue a collision warning to the vehicle, based on the detected position information of the target object in the target image and the fitted curves of the lane lines, according to the following steps:
determining, based on the detected position information of the target object in the target image and the fitted curves of the lane lines, whether the target object is in the lane where the vehicle is currently located;
when it is determined that the target object is in the lane where the vehicle is currently located, issuing a collision warning to the vehicle based on the fitted curves of that lane's lane lines and the position information of the target object in the target image.
In a possible implementation, the warning module 304 is configured to determine, based on the detected position information of the target object in the target image and the fitted curves of the lane lines, whether the target object is in the lane where the vehicle is currently located according to the following steps:
determining, based on the vertical position of the target object in the target image and the fitted curves of the two lane lines of the vehicle's current lane, the horizontal positions on those two fitted curves corresponding to the vertical position; and, if the horizontal position of the target object in the target image lies between the two horizontal positions corresponding to the two lane lines, determining that the target object is in the lane where the vehicle is currently located.
In a possible implementation, the warning module 304 is configured to issue a collision warning to the vehicle, based on the fitted curves of that lane's lane lines and the position information of the target object in the target image, according to the following steps:
determining, based on the position information of the target object in the target image, the target curve segments of the lane's two fitted lane-line curves that lie between the vehicle and the target object;
computing the actual travel distance between the target object and the vehicle based on the target curve segments corresponding to the two lane lines;
determining the estimated time to collision between the target object and the vehicle based on the actual travel distance;
and issuing a collision warning if the estimated time to collision is within a target duration.
In a possible implementation, the detection module 302 is configured to determine the pixels belonging to each lane line according to the following steps:
performing semantic segmentation on the target image based on a trained first semantic segmentation model, and determining multiple lane line pixels corresponding to the same lane line semantic label, where different lane lines have different lane line semantic labels;
determining the multiple lane line pixels corresponding to the same lane line semantic label as the pixels belonging to the same lane line.
In a possible implementation, the detection module 302 determines the pixels belonging to each lane line according to the following steps:
performing semantic segmentation on the target image based on a trained second semantic segmentation model, and determining multiple lane line pixels corresponding to the lane line semantic label, where all lane lines share the same lane line semantic label;
randomly selecting, from the multiple lane line pixels, a preset number of lane line pixels as initial cluster centers;
determining the distances between the multiple lane line pixels and each initial cluster center, and assigning each lane line pixel to the lane line whose cluster center is nearest;
determining a new cluster center for each lane line and, based on the new cluster centers, returning to the step of assigning the lane line pixels to the lane line with the nearest cluster center, until the convergence condition of the partition is met, thereby obtaining the pixels belonging to each lane line.
In a possible implementation, the fitting module 303 is configured to perform curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image, by:
for each lane line, constructing a fitting curve equation whose independent variable is the vertical position variable of that lane line's pixels in the target image and whose dependent variable is the horizontal position variable of those pixels in the target image;
selecting at least some of the pixels included in that lane line, determining the equation parameter values of the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the target image, and using the fitting curve equation containing the parameter values as the fitted curve representing the position of that lane line in the target image.
In a possible implementation, the fitting module 303 is configured to determine the equation parameter values of the constructed fitting curve equation, based on the vertical and horizontal positions of the selected pixels in the target image, according to the following steps:
determining the output of the fitting curve equation based on the constructed fitting curve equation and the vertical positions of the selected pixels in the target image;
determining an objective function containing the equation parameters of the fitting curve equation based on the output of the fitting curve equation and the horizontal positions of the selected pixels in the target image;
determining the equation parameter values of the fitting curve equation that minimize the value of the objective function.
In a possible implementation, the fitting module 303 is configured to select some of the pixels included in that lane line according to the following steps:
for each pixel included in that lane line, obtaining the semantic score with which the pixel belongs to the lane line semantic label;
ranking the pixels in descending order of semantic score to obtain a ranking result, and selecting some of the pixels according to the ranking result.
In a possible implementation, the fitting module 303 is configured to determine the equation parameter values of the constructed fitting curve equation, based on the vertical and horizontal positions of the selected pixels in the target image, according to the following steps:
projecting the selected pixels into the image coordinate system of the bird's-eye view, based on a first conversion relationship between the image coordinate system of the target image and the world coordinate system and a second conversion relationship between the image coordinate system of the bird's-eye view and the world coordinate system, to obtain the vertical and horizontal positions of each pixel in the bird's-eye view;
determining the equation parameter values of the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the bird's-eye view.
For descriptions of the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the relevant descriptions in the above method embodiments, which are not detailed here.
An embodiment of the present disclosure further provides an electronic device. As shown in FIG. 4, a schematic structural diagram of the electronic device provided by an embodiment of the present disclosure, it includes: a processor 401, a memory 402, and a bus 403. The memory 402 stores machine-readable instructions executable by the processor 401 (for example, execution instructions corresponding to the acquisition module 301, the detection module 302, the fitting module 303, and the warning module 304 in the apparatus of FIG. 3). When the electronic device runs, the processor 401 communicates with the memory 402 through the bus 403, and when the machine-readable instructions are executed by the processor 401, the following processing is performed:
acquiring a target image captured by a camera mounted on the current vehicle;
performing target detection on the target image;
performing curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image;
issuing a collision warning based on the detected position information of a target object in the target image and the fitted curves of the lane lines.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the collision warning method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the collision warning method described in the above method embodiments. For details, reference may be made to the above method embodiments, which are not repeated here.
The above computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems and apparatuses described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, optical discs, and other media that can store program code.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate, not limit, its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that anyone familiar with the technical field can, within the technical scope disclosed by the present disclosure, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some of the technical features; these modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and should all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (13)
- A collision warning method, characterized in that the method comprises: acquiring a target image captured by a camera mounted on a vehicle; performing target detection on the target image; performing curve fitting on the lane lines in the target image based on detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image; and issuing a collision warning to the vehicle based on detected position information of a target object in the target image and the fitted curves of the lane lines.
- The method according to claim 1, characterized in that issuing a collision warning to the vehicle based on the detected position information of the target object in the target image and the fitted curves of the lane lines comprises: determining, based on the detected position information of the target object in the target image and the fitted curves of the lane lines, whether the target object is in the lane where the vehicle is currently located; and, when it is determined that the target object is in the lane where the vehicle is currently located, issuing a collision warning to the vehicle based on the fitted curves of that lane's lane lines and the position information of the target object in the target image.
- The method according to claim 2, characterized in that determining, based on the detected position information of the target object in the target image and the fitted curves of the lane lines, whether the target object is in the lane where the vehicle is currently located comprises: determining, based on the vertical position of the target object in the target image and the fitted curves of the two lane lines of the vehicle's current lane, the horizontal positions on the fitted curves of the two lane lines of the vehicle's current lane corresponding to the vertical position; and, if the horizontal position of the target object in the target image lies between the two horizontal positions corresponding to the two lane lines, determining that the target object is in the lane where the vehicle is currently located.
- The method according to claim 2 or 3, characterized in that issuing a collision warning to the vehicle based on the fitted curves of that lane's lane lines and the position information of the target object in the target image comprises: determining, based on the position information of the target object in the target image, the target curve segments of the lane's two fitted lane-line curves that lie between the vehicle and the target object; computing the actual travel distance between the target object and the vehicle based on the target curve segments corresponding to the two lane lines; determining the estimated time to collision between the target object and the vehicle based on the actual travel distance; and issuing a collision warning if the estimated time to collision is within a target duration.
- The method according to any one of claims 1 to 4, characterized in that the method further comprises: performing semantic segmentation on the target image based on a trained first semantic segmentation model, and determining multiple lane line pixels corresponding to the same lane line semantic label, wherein different lane lines have different lane line semantic labels; and determining the multiple lane line pixels corresponding to the same lane line semantic label as the pixels belonging to the same lane line.
- The method according to any one of claims 1 to 4, characterized in that the method further comprises: performing semantic segmentation on the target image based on a trained second semantic segmentation model, and determining multiple lane line pixels corresponding to the lane line semantic label, wherein all lane lines share the same lane line semantic label; randomly selecting, from the multiple lane line pixels, a preset number of lane line pixels as initial cluster centers; determining the distances between the multiple lane line pixels and each initial cluster center, and assigning each lane line pixel to the lane line whose cluster center is nearest; and determining a new cluster center for each lane line and, based on the new cluster centers, returning to the step of assigning the lane line pixels to the lane line with the nearest cluster center, until a partition convergence condition is met, thereby obtaining the pixels belonging to each lane line.
- The method according to any one of claims 1 to 6, characterized in that performing curve fitting on the lane lines in the target image based on the detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image, comprises: for each lane line, constructing a fitting curve equation whose independent variable is the vertical position variable of that lane line's pixels in the target image and whose dependent variable is the horizontal position variable of those pixels in the target image; and selecting at least some of the pixels included in that lane line, determining the equation parameter values of the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the target image, and using the fitting curve equation containing the equation parameter values as the fitted curve representing the position of that lane line in the target image.
- The method according to claim 7, characterized in that determining the equation parameter values of the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the target image comprises: determining the output of the fitting curve equation based on the constructed fitting curve equation and the vertical positions of the selected pixels in the target image; determining an objective function containing the equation parameters of the fitting curve equation based on the output of the fitting curve equation and the horizontal positions of the selected pixels in the target image; and determining the equation parameter values of the fitting curve equation that minimize the value of the objective function.
- The method according to claim 7 or 8, characterized in that selecting at least some of the pixels included in that lane line comprises: for each pixel included in that lane line, obtaining the semantic score with which the pixel belongs to the lane line semantic label; and ranking the pixels in descending order of semantic score to obtain a ranking result, and selecting some of the pixels according to the ranking result.
- The method according to any one of claims 7 to 9, characterized in that determining the equation parameter values of the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the target image comprises: projecting the selected pixels into the image coordinate system of the bird's-eye view, based on a first conversion relationship between the image coordinate system of the target image and the world coordinate system and a second conversion relationship between the image coordinate system of the bird's-eye view and the world coordinate system, to obtain the vertical and horizontal positions of each pixel in the bird's-eye view; and determining the equation parameter values of the constructed fitting curve equation based on the vertical and horizontal positions of the selected pixels in the bird's-eye view.
- A collision warning apparatus, characterized in that the apparatus comprises: an acquisition module configured to acquire a target image captured by a camera mounted on a vehicle; a detection module configured to perform target detection on the target image; a fitting module configured to perform curve fitting on the lane lines in the target image based on detected position information of the pixels belonging to each lane line, to obtain fitted curves representing the positions of the lane lines in the target image; and a warning module configured to issue a collision warning to the vehicle based on detected position information of a target object in the target image and the fitted curves of the lane lines.
- An electronic device, characterized by comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the collision warning method according to any one of claims 1 to 10 are performed.
- A computer-readable storage medium, characterized in that a computer program is stored thereon; when the computer program is run by a processor, the steps of the collision warning method according to any one of claims 1 to 10 are performed.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110970366.1A CN113673438A (zh) | 2021-08-23 | 2021-08-23 | 一种碰撞预警的方法、装置、电子设备及存储介质 |
CN202110970366.1 | 2021-08-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023024516A1 true WO2023024516A1 (zh) | 2023-03-02 |
Family
ID=78545213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/084366 WO2023024516A1 (zh) | 2021-08-23 | 2022-03-31 | 一种碰撞预警的方法、装置、电子设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113673438A (zh) |
WO (1) | WO2023024516A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113673438A (zh) * | 2021-08-23 | 2021-11-19 | 上海商汤临港智能科技有限公司 | 一种碰撞预警的方法、装置、电子设备及存储介质 |
CN115995163A (zh) * | 2023-03-23 | 2023-04-21 | 江西通慧科技集团股份有限公司 | 一种车辆碰撞预警方法及系统 |
CN116495004A (zh) * | 2023-06-28 | 2023-07-28 | 杭州鸿泉物联网技术股份有限公司 | 车辆环境感知方法、装置、电子设备和存储介质 |
CN116506473A (zh) * | 2023-06-29 | 2023-07-28 | 北京格林威尔科技发展有限公司 | 一种基于智能门锁的预警方法及装置 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115447597A (zh) * | 2021-12-06 | 2022-12-09 | 北京罗克维尔斯科技有限公司 | 道路作业区域预警方法、装置、设备及存储介质 |
TWI831242B (zh) * | 2022-06-15 | 2024-02-01 | 鴻海精密工業股份有限公司 | 車輛碰撞預警方法、系統、汽車及電腦可讀存儲介質 |
CN115601435B (zh) * | 2022-12-14 | 2023-03-14 | 天津所托瑞安汽车科技有限公司 | 车辆姿态检测方法、装置、车辆及存储介质 |
CN115684637B (zh) * | 2022-12-30 | 2023-03-17 | 南京理工大学 | 基于路侧单目相机标定的高速公路车辆测速方法及设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110466516A (zh) * | 2019-07-11 | 2019-11-19 | 北京交通大学 | 一种基于非线性规划的曲线道路自动车换道轨迹规划方法 |
US20190384304A1 (en) * | 2018-06-13 | 2019-12-19 | Nvidia Corporation | Path detection for autonomous machines using deep neural networks |
WO2020182564A1 (de) * | 2019-03-11 | 2020-09-17 | Zf Friedrichshafen Ag | Vision-basiertes lenkungsassistenzsystem für landfahrzeuge |
CN112712040A (zh) * | 2020-12-31 | 2021-04-27 | 潍柴动力股份有限公司 | 基于雷达校准车道线信息的方法、装置、设备及存储介质 |
CN113673438A (zh) * | 2021-08-23 | 2021-11-19 | 上海商汤临港智能科技有限公司 | 一种碰撞预警的方法、装置、电子设备及存储介质 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101352662B1 (ko) * | 2012-12-28 | 2014-01-17 | 주식회사 만도 | 차선 인식을 이용한 패싱 차량 검출 장치 및 방법 |
CN108875603B (zh) * | 2018-05-31 | 2021-06-04 | 上海商汤智能科技有限公司 | 基于车道线的智能驾驶控制方法和装置、电子设备 |
CN109002795B (zh) * | 2018-07-13 | 2021-08-27 | 清华大学 | 车道线检测方法、装置及电子设备 |
CN109147368A (zh) * | 2018-08-22 | 2019-01-04 | 北京市商汤科技开发有限公司 | 基于车道线的智能驾驶控制方法装置与电子设备 |
CN111433780A (zh) * | 2018-11-29 | 2020-07-17 | 深圳市大疆创新科技有限公司 | 车道线检测方法、设备、计算机可读存储介质 |
CN110203210A (zh) * | 2019-06-19 | 2019-09-06 | 厦门金龙联合汽车工业有限公司 | 一种车道偏离预警方法、终端设备及存储介质 |
CN110414386B (zh) * | 2019-07-12 | 2022-01-21 | 武汉理工大学 | 基于改进scnn网络的车道线检测方法 |
CN110781768A (zh) * | 2019-09-30 | 2020-02-11 | 奇点汽车研发中心有限公司 | 目标对象检测方法和装置、电子设备和介质 |
CN113257036A (zh) * | 2020-02-13 | 2021-08-13 | 宁波吉利汽车研究开发有限公司 | 一种车辆碰撞预警方法、装置、设备和存储介质 |
CN111860319B (zh) * | 2020-07-20 | 2024-03-26 | 阿波罗智能技术(北京)有限公司 | 车道线的确定方法、定位精度的评测方法、装置、设备 |
CN112613392B (zh) * | 2020-12-18 | 2024-07-23 | 北京国家新能源汽车技术创新中心有限公司 | 基于语义分割的车道线检测方法、装置、系统及存储介质 |
- 2021-08-23: CN application CN202110970366.1A filed (published as CN113673438A, status: active, pending)
- 2022-03-31: PCT application PCT/CN2022/084366 filed (published as WO2023024516A1, status: active, application filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190384304A1 (en) * | 2018-06-13 | 2019-12-19 | Nvidia Corporation | Path detection for autonomous machines using deep neural networks |
WO2020182564A1 (de) * | 2019-03-11 | 2020-09-17 | Zf Friedrichshafen Ag | Vision-basiertes lenkungsassistenzsystem für landfahrzeuge |
CN110466516A (zh) * | 2019-07-11 | 2019-11-19 | 北京交通大学 | 一种基于非线性规划的曲线道路自动车换道轨迹规划方法 |
CN112712040A (zh) * | 2020-12-31 | 2021-04-27 | 潍柴动力股份有限公司 | 基于雷达校准车道线信息的方法、装置、设备及存储介质 |
CN113673438A (zh) * | 2021-08-23 | 2021-11-19 | 上海商汤临港智能科技有限公司 | 一种碰撞预警的方法、装置、电子设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
SHENG PENG-CHENG; LUO XIN-WEN; LI JING-PU; WU XUE-YI; BIAN XUE-LIANG: "Obstacle avoidance path planning of intelligent electric vehicles in winding road scene", JOURNAL OF TRAFFIC AND TRANSPORTATION ENGINEERING, vol. 20, no. 2, 15 April 2020 (2020-04-15), pages 195 - 204, XP009543845, ISSN: 1671-1637 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113673438A (zh) * | 2021-08-23 | 2021-11-19 | 上海商汤临港智能科技有限公司 | 一种碰撞预警的方法、装置、电子设备及存储介质 |
CN115995163A (zh) * | 2023-03-23 | 2023-04-21 | 江西通慧科技集团股份有限公司 | 一种车辆碰撞预警方法及系统 |
CN116495004A (zh) * | 2023-06-28 | 2023-07-28 | 杭州鸿泉物联网技术股份有限公司 | 车辆环境感知方法、装置、电子设备和存储介质 |
CN116506473A (zh) * | 2023-06-29 | 2023-07-28 | 北京格林威尔科技发展有限公司 | 一种基于智能门锁的预警方法及装置 |
CN116506473B (zh) * | 2023-06-29 | 2023-09-22 | 北京格林威尔科技发展有限公司 | 一种基于智能门锁的预警方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN113673438A (zh) | 2021-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023024516A1 (zh) | 一种碰撞预警的方法、装置、电子设备及存储介质 | |
JP6934574B2 (ja) | 前方衝突制御方法および装置、電子機器、プログラムならびに媒体 | |
US11840239B2 (en) | Multiple exposure event determination | |
CN113439247B (zh) | 自主载具的智能体优先级划分 | |
JP2022516288A (ja) | 階層型機械学習ネットワークアーキテクチャ | |
US20140354684A1 (en) | Symbology system and augmented reality heads up display (hud) for communicating safety information | |
GB2560620A (en) | Recurrent deep convolutional neural network for object detection | |
KR20210038852A (ko) | 조기 경보 방법, 장치, 전자 기기, 컴퓨터 판독 가능 저장 매체 및 컴퓨터 프로그램 | |
JP4670805B2 (ja) | 運転支援装置、及びプログラム | |
CN111595357B (zh) | 可视化界面的显示方法、装置、电子设备和存储介质 | |
JP2020194263A (ja) | 事故分析装置、事故分析方法及びプログラム | |
WO2022161139A1 (zh) | 行驶朝向检测方法、装置、计算机设备及存储介质 | |
JP2021099877A (ja) | 専用車道での走行をリマインダーする方法、装置、機器及び記憶媒体 | |
CN104875740B (zh) | 用于管理跟随空间的方法、主车辆以及跟随空间管理单元 | |
JP2022507128A (ja) | 交差点状態検出方法、装置、電子機器及び車両 | |
CN112257542A (zh) | 障碍物感知方法、存储介质及电子设备 | |
Nieto et al. | On creating vision‐based advanced driver assistance systems | |
CN112735163B (zh) | 确定目标物体静止状态的方法、路侧设备、云控平台 | |
CN110154896B (zh) | 一种检测障碍物的方法以及设备 | |
CN110057377B (zh) | 路径导航方法及相关产品 | |
CN115331482B (zh) | 车辆预警提示方法、装置、基站及存储介质 | |
CN115798260A (zh) | 一种行人和车辆动态预判方法、装置和存储介质 | |
Yang et al. | An Improved Object Detection and Trajectory Prediction Method for Traffic Conflicts Analysis | |
CN112861657A (zh) | 一种无人车经召停靠方法、终端设备及存储介质 | |
Smaldone et al. | Improving bicycle safety through automated real-time vehicle detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22859870; Country of ref document: EP; Kind code of ref document: A1 |