CN114782923B - Detection system for dead zone of vehicle - Google Patents

Detection system for dead zone of vehicle

Info

Publication number
CN114782923B
CN114782923B (application CN202210490050.7A; published as CN114782923A, granted as CN114782923B)
Authority
CN
China
Prior art keywords
detection
segmentation
branch
blind area
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210490050.7A
Other languages
Chinese (zh)
Other versions
CN114782923A (en)
Inventor
陈明木
王汉超
徐绍凯
贾宝芝
何一凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Ruiwei Information Technology Co ltd
Original Assignee
Xiamen Ruiwei Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Ruiwei Information Technology Co ltd
Priority to CN202210490050.7A
Publication of CN114782923A
Application granted
Publication of CN114782923B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a detection system for a vehicle blind zone, whose detection model is implemented through the following steps: S1, collecting a blind-zone data set and labeling the samples; S2, constructing a joint blind-zone detection and segmentation network structure based on the Yolov5 network, and improving the sample allocation strategy used when training the detection branches, the joint network structure comprising two detection branches and one segmentation branch; and S3, determining the positive and negative samples during training according to the sample allocation strategy and performing joint detection-and-regression training to obtain a trained vehicle blind-zone detection model. Based on the Yolov5 framework, the invention customizes a multi-task network structure that combines detection and segmentation, so that the accuracy of both the detection and the segmentation algorithms can be improved simultaneously during training; the improved sample allocation strategy raises detection accuracy; and in the post-processing stage the segmentation-branch result makes the detected target's circumscribed box more stable, yielding the obstacle's accurate foot point on the ground.

Description

Detection system for dead zone of vehicle
Technical Field
The invention relates to the technical field of obstacle detection during vehicle driving, and in particular to a detection system for a vehicle blind zone and an implementation method thereof.
Background
Since the start of the 21st century, national development has accelerated and the numbers of private and freight vehicles have grown rapidly, so traffic accidents have increased accordingly. A portion of these accidents occur because some vehicles, such as muck trucks and other trucks, have large visual blind zones on their left and right sides during driving: the driver cannot see the real-time situation in the blind zone, so pedestrians, bicycles, electric bikes, and other obstacles in the blind zone are easily struck when the vehicle's driving state changes, causing serious traffic accidents.
This scheme mainly detects the blind zones invisible to the left, right, and rear of a truck or muck truck during driving, confirms whether an obstacle is present, and determines the obstacle's accurate foot point on the ground. If an obstacle exists and its foot point lies within a dangerous range, a warning must be issued to prompt the driver to slow down and drive carefully so as to ensure driving safety. The obstacles mainly comprise pedestrians and riders.
To monitor the blind-zone situation in real time, many current methods either monitor directly with radar or use a vision-based target detection method. The radar-based approach has the disadvantages of high cost and inability to determine the type of obstacle. A vision-based target detection method can effectively reduce cost, but existing schemes have four disadvantages: first, detection accuracy is low; second, the detection box fluctuates considerably, so once a pedestrian enters the alarm region, a warning cannot be raised quickly and in real time; third, the obstacle's foot point cannot be located accurately, so the obstacle's actual position cannot be determined accurately; fourth, in the blind-zone monitoring task, panoptic segmentation is less accurate than focusing on the foreground only, and its labeling cost is relatively high.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a detection system for a vehicle blind zone that assists the driver by automatically monitoring and accurately locating obstacles within the blind zone during driving, and that can raise an alarm response quickly and in real time.
The invention provides a detection system for a vehicle blind zone, comprising a detection model of the vehicle blind zone, wherein the implementation process of the detection model comprises the following steps:
S1, collecting a blind-zone data set and labeling the samples, namely labeling, for each picture sample, the foreground object bounding boxes, the foreground object categories, and the category to which each foreground segmentation pixel belongs; and converting the labeled foreground bounding-box annotations into YOLO-format labels;
S2, constructing a joint blind-zone detection and segmentation network structure based on the Yolov5 network, and improving the sample allocation strategy used when training the detection branches;
The joint blind-zone detection and segmentation network structure comprises an input layer, a backbone network, a feature pyramid fusion layer, two detection branches, and one segmentation branch; the two detection branches are used respectively for positive/negative sample allocation and classification and for regression training, and output the foreground target class, while the segmentation branch further refines the foreground target class output by the detection branches down to the pixel level, accurately obtaining the class of each foreground segmentation pixel;
The sample allocation strategy is: if the aspect ratio of the detection target equals 1, take the anchor box at the center point and the two anchor boxes nearest to the center point as positive samples; if the aspect ratio is less than 1, take only the center point and its nearest neighboring point in the vertical direction; if the aspect ratio is greater than 1, take only the center point and its nearest neighboring point in the horizontal direction as positive samples; all remaining points are negative samples;
And S3, determining the positive and negative samples during training according to the sample allocation strategy, and performing joint detection-and-regression training to obtain a trained detection model of the vehicle blind zone.
Further, in step S3, the two detection branches comprise a sample-detection classification branch and a regression branch; the classification branch is trained with a binary cross-entropy loss function, the regression branch is trained with a CIoU loss function, and the segmentation branch is likewise trained with a binary cross-entropy loss function;
During training the background label is set to 0, and the foreground object class labels are 1, 2, 3, …, n, where n is an integer greater than or equal to 1.
Further, the system is used to execute the following blind-zone detection process after deployment:
acquiring a real-time blind-zone monitoring picture through a camera module;
reading the detection model file, and feeding the monitoring picture to the calling interface for forward inference;
acquiring the forward-inference result, which includes a detection result and a segmentation result; the detection result is mainly decoded to obtain the obstacle bounding box, while the segmentation result mainly yields the foreground target, whose minimum bounding box is then extracted to correct the detection-box result so that the obstacle's foot point on the ground is accurately known.
Further, for the blind zone on the right side of the vehicle, after the forward-inference result is obtained, the method further comprises the following step:
judging, according to the corrected detection box, whether it falls within any of a preset number of alarm regions, and issuing a timely warning at the alarm level corresponding to the judgment result to prompt the driver to drive carefully.
One or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages: based on a Yolov5 joint blind-zone detection and segmentation multi-task network structure, the invention enables one network to complete two tasks simultaneously, and the two tasks, learned together, promote each other, jointly improving detection and segmentation accuracy while improving efficiency; the network's segmentation branch segments only the foreground, so it can focus on more foreground features, which greatly improves accuracy compared with existing panoptic segmentation methods and effectively reduces the labeling cost of the segmentation task; the anchor-box allocation strategy used when training the Yolov5 detection branches is improved, making the detection branches more accurate; in addition, the segmentation-branch result makes the detected target's circumscribed box more stable, and the segmentation can accurately locate the obstacle's foot point.
The foregoing is only an overview of the present invention; preferred embodiments are described below so that the technical means of the invention can be understood more clearly and implemented according to the contents of the specification, and so that the above and other objects, features, and advantages of the invention become more readily apparent.
Drawings
The invention will be further described below with reference to embodiments and the accompanying drawings.
FIG. 1 is a flow chart of a method of implementing the system of the present invention;
FIG. 2 is a flow chart of collecting a dead zone data set and sample labeling according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a blind area detection and segmentation joint network structure according to an embodiment of the present invention;
FIG. 4 is a diagram of the sample allocation strategy of the prior Yolov3 and Yolov4;
FIG. 5 is a diagram of the sample allocation strategy of the prior Yolov5;
FIG. 6 is a diagram of the sample allocation strategy according to an embodiment of the present invention;
FIG. 7 is a flowchart of the process by which the system of the present invention detects a vehicle blind zone.
Detailed Description
The embodiment of the application provides a detection system for a vehicle blind zone that assists the driver by automatically monitoring and accurately locating obstacles within the blind zone during driving, and that can raise an alarm response quickly and in real time.
The overall idea of the technical scheme in the embodiment of the application is as follows: because dangerous blind-zone situations require fast and timely alarms, an end-to-end joint blind-zone detection and segmentation multi-task network structure is customized on the Yolov5 framework, enabling one network to complete two tasks simultaneously; the two tasks, learned together, promote each other, jointly improving detection and segmentation accuracy and improving efficiency. The network's segmentation branch segments only the foreground, so it can focus on more foreground features, which greatly improves accuracy compared with existing panoptic segmentation methods and effectively reduces the labeling cost of the segmentation task. The anchor-box allocation strategy used when training the Yolov5 detection branches is improved, making the detection branches more accurate. In addition, the segmentation-branch result makes the detected target's circumscribed box more stable, and the segmentation can accurately locate the obstacle's foot point.
The implementation method of the system of the embodiment of the invention mainly comprises three parts:
the first part: collecting the blind-zone data set and labeling samples;
the second part: constructing the joint blind-zone detection and segmentation network structure based on Yolov5, and improving the anchor-box allocation strategy used when training the detection branches;
the third part: deploying the blind-zone detection and segmentation multi-task system, with an application alarm example.
The image data and annotation data from the first part feed the joint model training of the second part. The second part establishes a new joint network structure tailored to the visual characteristics of the blind zone and improves the training anchor-box allocation strategy of the detection branches so that the model achieves higher accuracy. The third part is an example in which the multi-task model trained in the second part is deployed on a mobile device together with a practical alarm strategy; the whole system can help the driver monitor the blind zone quickly and in real time during driving, so that whenever an obstacle is within the blind-zone range an alarm is raised in time to remind the driver to drive carefully.
Example 1
As shown in fig. 1, the present embodiment provides a detection system for a vehicle blind zone, comprising a detection model of the vehicle blind zone, wherein the implementation process of the detection model comprises the following steps:
S1, collecting a blind-zone data set and labeling the samples, namely labeling, for each picture sample, the foreground object bounding boxes, the foreground object categories, and the category to which each foreground segmentation pixel belongs; and converting the labeled foreground bounding-box annotations into YOLO-format labels;
As shown in fig. 2: a camera is mounted at the right rear of the vehicle, facing forward to monitor the blind zone ahead; the vehicle is then driven on the road while blind-zone video is recorded, and finally sample pictures are extracted. A labeling tool is used to annotate, on each sample picture, the foreground object bounding boxes and categories as well as the category of each foreground segmentation pixel, the categories mainly comprising pedestrians and riders; the labeled bounding-box annotations are converted into YOLO-format labels (x, y, w, h), as sketched below.
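For illustration, the conversion can be sketched as follows, assuming the labeling tool exports pixel-coordinate corner boxes (x1, y1, x2, y2) and that the YOLO format stores the class id plus the box center and size normalized by image width and height; the function name is hypothetical:

```python
def to_yolo_label(x1, y1, x2, y2, img_w, img_h, class_id):
    """Convert a pixel-coordinate corner box into a YOLO-format label line:
    'class_id x_center y_center w h', with all box values normalized to [0, 1]."""
    x = (x1 + x2) / 2.0 / img_w   # normalized box-center x
    y = (y1 + y2) / 2.0 / img_h   # normalized box-center y
    w = (x2 - x1) / img_w         # normalized box width
    h = (y2 - y1) / img_h         # normalized box height
    return f"{class_id} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"

# e.g. a pedestrian (class 1) in a 1920x1080 frame
print(to_yolo_label(860, 300, 980, 640, 1920, 1080, 1))
```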
S2, constructing a joint blind-zone detection and segmentation network structure based on the Yolov5 network, and improving the sample allocation strategy used when training the detection branches;
As shown in fig. 3, the joint blind-zone detection and segmentation network structure comprises an input layer, a backbone network, a feature pyramid fusion layer, two detection branches, and one segmentation branch; the two detection branches are used respectively for positive/negative sample allocation and classification and for regression training, and output the foreground target class, while the segmentation branch further refines the foreground target class output by the detection branches down to the pixel level, accurately obtaining the class of each foreground segmentation pixel;
As will be appreciated by those skilled in the art, the base Yolov5 network includes an input layer, a backbone network (CSPNet), and a feature pyramid fusion layer (FPN). Because most blind-zone targets are small or medium-sized, the embodiment of the invention retains only the 8× and 16× downsampling detection heads, forming two detection branches used respectively for positive/negative sample allocation and classification and for regression training, and increases the number of preset anchor boxes from 3 to 5, so that more annotation boxes can be covered and the difficulty of training the regression is reduced. It is known that the detection box predicted by a model inevitably fluctuates and cannot accurately give the obstacle's foot point, which is at odds with the fast and accurate alarm response required for blind-zone detection. The embodiment of the invention therefore adds a segmentation branch to the original detection network structure; its main purpose is to output a pixel-level foreground category corresponding to the foreground category of the detection box. The foreground object can then be located precisely at the pixel level, the minimum bounding box of the segmented foreground can be extracted to correct the detection box, and the segmented region accurately reveals the obstacle's foot point on the ground. Two jointly learned branches are used rather than a segmentation method alone because the two tasks, learned simultaneously, promote each other and thus improve detection and segmentation accuracy together.
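A rough sketch of the head layout just described, not the patent's exact layers: two detection heads on the stride-8 and stride-16 feature maps with 5 anchors each, plus a segmentation head over the stride-8 map; the channel counts, activation, and upsampling path are illustrative assumptions:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 2   # e.g. pedestrian, rider
NUM_ANCHORS = 5   # increased from Yolov5's default 3, per the embodiment

class JointHead(nn.Module):
    """Two detection heads (stride 8 and 16) plus one foreground segmentation head."""
    def __init__(self, c8=128, c16=256):
        super().__init__()
        out_c = NUM_ANCHORS * (5 + NUM_CLASSES)        # (x, y, w, h, obj) + classes per anchor
        self.det8 = nn.Conv2d(c8, out_c, kernel_size=1)    # stride-8 head: small targets
        self.det16 = nn.Conv2d(c16, out_c, kernel_size=1)  # stride-16 head: medium targets
        # segmentation head: per-pixel foreground class logits at stride 8,
        # upsampled back to the input resolution
        self.seg = nn.Sequential(
            nn.Conv2d(c8, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, NUM_CLASSES + 1, 1),         # background + foreground classes
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )

    def forward(self, p8, p16):
        return self.det8(p8), self.det16(p16), self.seg(p8)

# shape check with dummy FPN outputs for a 512x512 input
p8, p16 = torch.randn(1, 128, 64, 64), torch.randn(1, 256, 32, 32)
d8, d16, seg = JointHead()(p8, p16)
print(d8.shape, d16.shape, seg.shape)  # (1,35,64,64) (1,35,32,32) (1,3,512,512)
```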
The sample allocation strategy is: if the aspect ratio of the detection target equals 1, take the anchor box at the center point and the two anchor boxes nearest to the center point as positive samples; if the aspect ratio is less than 1, take only the center point and its nearest neighboring point in the vertical direction; if the aspect ratio is greater than 1, take only the center point and its nearest neighboring point in the horizontal direction as positive samples; all remaining points are negative samples.
As described above, the two detection branches actually comprise two subtasks: a classification branch for the anchor boxes and a regression branch for the anchor boxes. The purpose of anchors is to reduce the difficulty of regression; a detection task typically places W×H×N anchors on the network output layer (assuming the output feature map has size W×H). In a classical detection algorithm such as Faster R-CNN, N is 9, and whether an anchor box is assigned as a positive or negative sample is decided by computing the overlap (Intersection over Union, IoU) between the anchor box and the ground-truth bounding box (GT BBox): if the IoU exceeds a preset threshold, typically 0.5, the anchor is a positive sample; otherwise it is a negative sample. Since Faster R-CNN is a two-stage method, the overall framework runs slowly. The SSD algorithm appeared later with the same allocation strategy as Faster R-CNN, but SSD as a whole is not as compact as Yolo, so the Yolo series emerged, of which v3, v4, and v5 are popular; the sample allocation strategies of v3 and v4 are the same, while v5 adopts a different sample allocation strategy to improve accuracy.
As shown in fig. 4, Yolov3 and Yolov4 decide that an anchor box is a positive sample as follows: after the center of the ground-truth bounding box (GT BBox) falls into a feature-map grid cell, the corresponding anchor box is judged positive if its overlap (IoU) with the GT BBox exceeds a certain threshold. Yolov5 improves on this allocation strategy: as shown in fig. 5, not only the anchor box at the center point is taken as a candidate positive sample, but also the two anchor boxes nearest to the center point, so Yolov5 obtains 3-9 times as many positive samples as v3 and v4.
As shown in fig. 6, the embodiment of the present invention holds that the Yolov5 sample allocation mode should depend on the aspect ratio of the target, and therefore proposes an improved allocation scheme: when the aspect ratio equals 1, the original Yolov5 allocation mode is used; when the aspect ratio is less than 1, only the center point and its nearest neighbor in the vertical direction are selected; and when the aspect ratio is greater than 1, only the center point and its nearest neighbor in the horizontal direction are selected. This largely guarantees the quality of the selected positive samples.
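A minimal sketch of this improved allocation rule, assuming grid coordinates are expressed in feature-map cells and that "the two anchor boxes nearest to the center point" means one horizontal and one vertical neighbor cell, as in Yolov5; the function name is hypothetical:

```python
def positive_cells(cx, cy, w, h):
    """Return the grid cells treated as positive-sample locations for a GT box whose
    center falls at (cx, cy) in feature-map units, following the aspect-ratio rule."""
    col, row = int(cx), int(cy)            # cell containing the GT center
    # nearest neighbor cells, as in Yolov5: left/right depending on which side of
    # the cell center cx falls, and up/down likewise for cy
    dx = -1 if (cx - col) < 0.5 else 1
    dy = -1 if (cy - row) < 0.5 else 1
    aspect = w / h
    if abs(aspect - 1.0) < 1e-6:           # square target: center + both nearest neighbors
        return [(col, row), (col + dx, row), (col, row + dy)]
    if aspect < 1.0:                        # tall target: center + nearest vertical neighbor
        return [(col, row), (col, row + dy)]
    return [(col, row), (col + dx, row)]    # wide target: center + nearest horizontal neighbor

print(positive_cells(10.3, 5.8, w=20, h=60))  # tall box -> [(10, 5), (10, 6)]
```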
And S3, determining the positive and negative samples during training according to the sample allocation strategy, and performing joint detection-and-regression training to obtain a trained detection model of the vehicle blind zone.
The positive and negative samples during training are determined by the sample allocation strategy of the detection branches. The sample-detection classification branch of the two detection branches is trained with a binary cross-entropy loss function, the regression branch is trained with a CIoU loss function, and the segmentation branch is likewise trained with a binary cross-entropy loss function. During training the background label is set to 0, and the foreground object class labels are 1, 2, 3, …, n, where n is an integer greater than or equal to 1.
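For concreteness, a sketch of these loss terms, assuming boxes are given as (x1, y1, x2, y2) corner tensors with mean reduction; this follows the standard CIoU definition rather than any patent-specific variant:

```python
import math
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # used for both the classification and segmentation branches

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss for boxes given as (x1, y1, x2, y2) tensors of shape (N, 4)."""
    # intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    # union area and IoU
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union
    # squared distance between box centers
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2 +
            (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    # squared diagonal of the smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    ciou = iou - rho2 / c2 - alpha * v
    return (1 - ciou).mean()
```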
The trained system must be deployed for use. The system of the embodiment of the invention can be deployed on a mobile device platform; an embedded HiSilicon (Hisi) device platform is used here as the demonstration platform, although other platforms can also be used. Specifically, during deployment the joint model must be quantized with the quantization tool provided on the Hisi platform to obtain a quantized model file, and the quantized joint model is then invoked for inference through the NNIE SDK call interface.
As further shown in fig. 7, the system is configured to perform the following blind-zone detection process after deployment:
after initialization, a real-time blind-zone monitoring picture is acquired through the camera module;
the detection model file is read (on embedded HiSilicon it can be read through the NNIE SDK software interface), and the monitoring picture is fed to the calling interface for forward inference;
the forward-inference result is acquired, including a detection result and a segmentation result; the detection result is mainly decoded to obtain the obstacle bounding box, while the segmentation result mainly yields the foreground target, whose minimum bounding box is then extracted to correct the detection-box result so that the obstacle's foot point on the ground is accurately known.
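A sketch of this correction step, assuming the segmentation output has already been argmax-ed into a per-pixel label map and taking the foot point as the bottom-center of the corrected box; the function name and conventions are illustrative:

```python
import cv2
import numpy as np

def correct_box_with_mask(det_box, seg_labels, class_id):
    """Refine a decoded detection box using the segmentation result.
    det_box: (x1, y1, x2, y2) from the detection branch.
    seg_labels: HxW array of per-pixel class ids (0 = background).
    Returns the corrected box and the obstacle's foot point on the ground."""
    x1, y1, x2, y2 = map(int, det_box)
    roi = (seg_labels[y1:y2, x1:x2] == class_id).astype(np.uint8)
    if roi.sum() == 0:                 # no foreground pixels: keep the original box
        box = det_box
    else:
        xs, ys, w, h = cv2.boundingRect(roi)   # minimum upright bounding box of the mask
        box = (x1 + xs, y1 + ys, x1 + xs + w, y1 + ys + h)
    foot = ((box[0] + box[2]) / 2.0, box[3])   # bottom-center as the ground foot point
    return box, foot
```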
The blind-zone detection process can run in a loop. For the right-side blind zone of the vehicle, the process can further comprise, after the forward-inference result is obtained, the following step (see the sketch after this paragraph):
judging, according to the corrected detection box, whether it falls within any of a preset number of alarm regions, and issuing a timely warning at the alarm level corresponding to the judgment result to prompt the driver to drive carefully.
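A sketch of such an alarm check, assuming axis-aligned rectangular alarm regions ordered from outer (caution) to inner (danger); the region coordinates and levels are illustrative placeholders:

```python
# alarm regions in image coordinates, from lowest to highest alert level (illustrative)
ALARM_REGIONS = [
    ((0, 400, 1280, 720), 1),    # outer zone: caution
    ((200, 520, 1080, 720), 2),  # inner zone: danger, prompt immediate deceleration
]

def alarm_level(foot_point):
    """Return the highest alarm level whose region contains the obstacle's foot point."""
    fx, fy = foot_point
    level = 0
    for (x1, y1, x2, y2), lv in ALARM_REGIONS:
        if x1 <= fx <= x2 and y1 <= fy <= y2:
            level = max(level, lv)
    return level  # 0 = no alarm
```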
Since the acquired forward-inference result includes a segmentation result, the segmentation-branch result can be used to make the detected target's circumscribed box more stable and to obtain the obstacle's accurate foot point.
The technical scheme provided by the embodiment of the invention has at least the following technical effects or advantages: based on a Yolov5 joint blind-zone detection and segmentation multi-task network structure, the invention enables one network to complete two tasks simultaneously, and the two tasks, learned together, promote each other, jointly improving detection and segmentation accuracy while improving efficiency; the network's segmentation branch segments only the foreground, so it can focus on more foreground features, which greatly improves accuracy compared with existing panoptic segmentation methods and effectively reduces the labeling cost of the segmentation task; the anchor-box allocation strategy used when training the Yolov5 detection branches is improved, making the detection branches more accurate; in addition, the segmentation-branch result makes the detected target's circumscribed box more stable, and the segmentation can accurately locate the obstacle's foot point.
While specific embodiments of the invention have been described above, those skilled in the art will appreciate that these embodiments are merely illustrative and are not intended to limit the scope of the invention; equivalent modifications and variations made in light of the spirit of the invention are covered by the claims of the present invention.

Claims (4)

1. A detection system for a vehicle blind zone, characterized in that the system comprises a detection model of the vehicle blind zone, and the implementation process of the detection model comprises the following steps:
S1, collecting a blind-zone data set and labeling the samples, namely labeling, for each picture sample, the foreground object bounding boxes, the foreground object categories, and the category to which each foreground segmentation pixel belongs; and converting the labeled foreground bounding-box annotations into YOLO-format labels;
S2, constructing a joint blind-zone detection and segmentation network structure based on the Yolov5 network, and improving the sample allocation strategy used when training the detection branches;
the joint blind-zone detection and segmentation network structure comprises an input layer, a backbone network, a feature pyramid fusion layer, two detection branches, and one segmentation branch; the two detection branches are used respectively for positive/negative sample allocation and classification and for regression training, and output the foreground target class, while the segmentation branch further refines the foreground target class output by the detection branches down to the pixel level, accurately obtaining the class of each foreground segmentation pixel;
the sample allocation strategy is: if the aspect ratio of the detection target equals 1, take the anchor box at the center point and the two anchor boxes nearest to the center point as positive samples; if the aspect ratio is less than 1, take only the center point and its nearest neighboring point in the vertical direction; if the aspect ratio is greater than 1, take only the center point and its nearest neighboring point in the horizontal direction as positive samples; all remaining points are negative samples;
and S3, determining the positive and negative samples during training according to the sample allocation strategy, and performing joint detection-and-regression training to obtain a trained detection model of the vehicle blind zone.
2. The vehicle blind-zone detection system according to claim 1, characterized in that: in step S3, the two detection branches comprise a sample-detection classification branch and a regression branch; the classification branch is trained with a binary cross-entropy loss function, and the regression branch is trained with a CIoU loss function; the segmentation branch is trained with a binary cross-entropy loss function;
during training the background label is set to 0, and the foreground object class labels are 1, 2, 3, …, n, where n is an integer greater than or equal to 1.
3. The vehicle blind-zone detection system according to claim 1, characterized in that the system is used to execute the following blind-zone detection process after deployment:
acquiring a real-time blind-zone monitoring picture through a camera module;
reading the detection model file, and feeding the monitoring picture to the calling interface for forward inference;
acquiring the forward-inference result, which includes a detection result and a segmentation result; the detection result is mainly decoded to obtain the obstacle bounding box, while the segmentation result mainly yields the foreground target, whose minimum bounding box is then extracted to correct the detection-box result so that the obstacle's foot point on the ground is accurately known.
4. The vehicle blind-zone detection system according to claim 3, characterized in that, for the blind zone on the right side of the vehicle, after the forward-inference result is obtained, the method further comprises the following step:
judging, according to the corrected detection box, whether it falls within any of a preset number of alarm regions, and issuing a timely warning at the alarm level corresponding to the judgment result to prompt the driver to drive carefully.
CN202210490050.7A 2022-05-07 2022-05-07 Detection system for dead zone of vehicle Active CN114782923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210490050.7A CN114782923B (en) 2022-05-07 2022-05-07 Detection system for dead zone of vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210490050.7A CN114782923B (en) 2022-05-07 2022-05-07 Detection system for dead zone of vehicle

Publications (2)

Publication Number Publication Date
CN114782923A (en) 2022-07-22
CN114782923B (en) 2024-05-03

Family

ID=82435932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210490050.7A Active CN114782923B (en) 2022-05-07 2022-05-07 Detection system for dead zone of vehicle

Country Status (1)

Country Link
CN (1) CN114782923B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135398A (en) * 2019-05-28 2019-08-16 厦门瑞为信息技术有限公司 Both hands off-direction disk detection method based on computer vision
WO2021004077A1 (en) * 2019-07-09 2021-01-14 华为技术有限公司 Method and apparatus for detecting blind areas of vehicle
CN110942000A (en) * 2019-11-13 2020-03-31 南京理工大学 Unmanned vehicle target detection method based on deep learning
CN111738056A (en) * 2020-04-27 2020-10-02 浙江万里学院 Heavy truck blind area target detection method based on improved YOLO v3
CN113156421A (en) * 2021-04-07 2021-07-23 南京邮电大学 Obstacle detection method based on information fusion of millimeter wave radar and camera
CN113657161A (en) * 2021-07-15 2021-11-16 北京中科慧眼科技有限公司 Non-standard small obstacle detection method and device and automatic driving system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a Blind-Zone Warning System for Engineering Vehicles; Liu Handing; Zhang Genyuan; Zhu Zhenzhen; Science and Technology Outlook; 2017-06-30 (No. 18); pp. 129-130 *

Also Published As

Publication number Publication date
CN114782923A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN109117709B (en) Collision avoidance system for autonomous vehicles
CN108172025B (en) Driving assisting method and device, vehicle-mounted terminal and vehicle
CN106647776B (en) Method and device for judging lane changing trend of vehicle and computer storage medium
CN112307921A (en) Vehicle-mounted end multi-target identification tracking prediction method
JP5073548B2 (en) Vehicle environment recognition device and preceding vehicle tracking control system
US20070222566A1 (en) Vehicle surroundings monitoring apparatus, vehicle surroundings monitoring method, and vehicle surroundings monitoring program
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN111351474B (en) Vehicle moving target detection method, device and system
JP6700373B2 (en) Apparatus and method for learning object image packaging for artificial intelligence of video animation
CN114418895A (en) Driving assistance method and device, vehicle-mounted device and storage medium
CN112613434A (en) Road target detection method, device and storage medium
CN118038386B (en) Dynamic target detection system under high-density complex traffic scene
CN111985388A (en) Pedestrian attention detection driving assistance system, device and method
CN112215073A (en) Traffic marking line rapid identification and tracking method under high-speed motion scene
CN114419603A (en) Automatic driving vehicle control method and system and automatic driving vehicle
CN114782923B (en) Detection system for dead zone of vehicle
JP2020170319A (en) Detection device
JP2023529239A (en) A Computer-implemented Method for Multimodal Egocentric Future Prediction
CN117612117A (en) Roadside near weed segmentation method, system and medium based on vehicle-mounted recorder
CN115171428B (en) Vehicle cut-in early warning method based on visual perception
Liu et al. Virtual world bridges the real challenge: Automated data generation for autonomous driving
JP2017182139A (en) Determination apparatus, determination method, and determination program
Zhang et al. Smart-rain: A degradation evaluation dataset for autonomous driving in rain
JP7505596B2 (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
Sato et al. Scene recognition for blind spot via road safety mirror and in-vehicle camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant