CN114782923A - Vehicle blind area detection system - Google Patents
- Publication number
- CN114782923A (Application CN202210490050.7A)
- Authority
- CN
- China
- Prior art keywords
- detection
- blind area
- segmentation
- branch
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a vehicle blind area detection system whose detection model is built as follows: S1, collect a blind area data set and label the samples; S2, construct a joint blind area detection and segmentation network structure based on the Yolov5 network, with an improved sample allocation strategy for detection-branch training; the joint network comprises two detection branches and one segmentation branch; S3, determine positive and negative samples during training according to the sample allocation strategy, and perform joint detection and regression training to obtain a trained vehicle blind area detection model. Based on the Yolov5 framework, the method customizes a multi-task network combining detection and segmentation, so that the accuracy of both the detection and segmentation algorithms improves during training; the improved sample allocation strategy further raises detection accuracy; and in the post-processing stage, the segmentation branch result stabilizes the detected target bounding box, yielding the obstacle's precise ground contact point.
Description
Technical Field
The invention relates to the technical field of vehicle driving obstacle detection, in particular to a vehicle blind area detection system and an implementation method thereof.
Background
Since the start of the 21st century, the rapid growth in the number of private and freight vehicles has been accompanied by a rise in traffic accidents. Some of these accidents occur because vehicles such as muck trucks and other trucks have large blind areas on their left and right sides, so the driver cannot see the real-time situation in those areas; when the vehicle changes its driving state, it can easily collide with pedestrians, bicycles, electric bicycles and other obstacles in the blind area, causing serious accidents.
This scheme mainly targets trucks and muck trucks during driving: it monitors the invisible blind areas behind the left and right sides of the vehicle, determines whether an obstacle is present together with the obstacle's precise ground contact point, and, if an obstacle exists and its contact point lies within the danger range, issues an alarm prompting the driver to drive carefully and slow down, thereby ensuring driving safety. The obstacles mainly comprise pedestrians and cyclists.
To monitor the blind area in real time, many methods use radar directly or use a vision-based target detection method. Radar-based systems are costly and cannot determine the type of obstacle. Vision-based target detection can effectively reduce cost, but existing schemes have several disadvantages: first, detection accuracy is low; second, the detection frame fluctuates strongly, so an early warning cannot be issued quickly once a pedestrian enters the alarm area; third, the obstacle's ground contact point cannot be located precisely, so neither can the obstacle's actual position; fourth, for blind area monitoring, panoptic segmentation is less accurate than segmenting only the foreground, and its labeling cost is higher.
Disclosure of Invention
The invention aims to provide a vehicle blind area detection system that assists the driver by automatically monitoring and precisely locating obstacles within the blind area during driving, and can generate an alarm response quickly and in real time.
The invention provides a vehicle blind area detection system comprising a detection model of the vehicle blind area, the implementation process of which includes the following steps:
S1, collecting a blind area data set and labeling the samples, marking each picture sample's foreground target bounding box, foreground target category, and the category of each foreground segmentation pixel; converting the labeled foreground object bounding box labels into yolo format labels;
S2, constructing a blind area detection and segmentation joint network structure based on the Yolov5 network, with an improved sample allocation strategy for detection-branch training;
the blind area detection and segmentation joint network structure comprises an input layer, a backbone network, a feature pyramid fusion layer, two detection branches and a segmentation branch; the two detection branches are used for positive/negative sample allocation, classification and regression training and output the foreground object classes, while the segmentation branch refines the foreground object classes output by the detection branches to the pixel level, yielding the class of each foreground segmentation pixel;
the sample allocation strategy is: if the aspect ratio of the detection target equals 1, take the anchor frame at the center point and the two anchor frames closest to the center point as positive samples; if the aspect ratio is less than 1, take only the center point and its vertically nearest neighbor; if the aspect ratio is greater than 1, take only the center point and its horizontally nearest neighbor as positive samples; all remaining points are negative samples;
and S3, determining positive and negative samples in the training process according to the sample distribution strategy, and carrying out detection and regression combined training to obtain a trained detection model of the vehicle blind area.
Further, in step S3, the two detection branches include a sample classification branch and a regression branch; the classification branch is trained with a binary cross entropy loss function and the regression branch with a CIOU loss function; the segmentation branch is also trained with a binary cross entropy loss function;
during training, the background label is set to 0, the foreground object class labels are 1, 2, 3 … n, and n is an integer greater than or equal to 1.
Further, after deployment the system executes the following blind area detection process:
acquiring a real-time blind area monitoring picture through a camera module;
reading a detection model file, and inputting a monitoring picture into a calling interface for forward reasoning;
acquiring a forward reasoning result comprising a detection result and a segmentation result; the detection result is decoded to obtain the obstacle bounding box; from the segmentation result the foreground target is obtained, and its minimum enclosing box is extracted to correct the detection box, so that the obstacle's ground contact point on the ground is obtained precisely.
Further, for the right blind area of the vehicle, after obtaining the forward reasoning result, the method further includes:
and determining, from the corrected detection frame, which level of the preset warning areas it falls into, and issuing an early warning in time according to the warning level corresponding to that determination, prompting the driver to drive carefully.
One or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages: the invention is based on a Yolov5 multi-task network structure combining blind area detection and segmentation, so that one network completes both tasks at the same time; the two tasks are learned simultaneously and promote each other, jointly improving detection and segmentation accuracy as well as efficiency. The segmentation branch segments only the foreground, so it can attend to more foreground features; its accuracy is greatly improved over existing panoptic segmentation methods, and the labeling cost of the segmentation task is effectively reduced. The anchor frame allocation strategy for Yolov5 detection-branch training is improved, making the detection branch more accurate. In addition, the segmentation branch result makes the detected target bounding box more stable and precisely locates the obstacle's ground contact point.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
The invention will be further described with reference to the following examples with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method of implementing the system of the present invention;
FIG. 2 is a flowchart of collecting a blind area data set and labeling a sample according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a blind area detection and segmentation joint network structure according to an embodiment of the present invention;
FIG. 4 is a sample allocation strategy diagram for the existing Yolov3 and Yolov4 versions;
FIG. 5 is a sample allocation strategy diagram for the existing Yolov5 version;
FIG. 6 is a sample allocation policy diagram according to an embodiment of the present invention;
FIG. 7 is a flow chart of a vehicle blind zone detection process performed by the system of the present invention.
Detailed Description
The embodiment of the application provides a vehicle blind area detection system that assists the driver by automatically monitoring and precisely locating obstacles within the blind area during driving, and can generate an alarm response quickly and in real time.
The general idea of the technical scheme in the embodiment of the application is as follows: because a blind area danger must be alarmed quickly and in real time, an end-to-end multi-task network structure combining blind area detection and segmentation is customized on the Yolov5 framework, so that one network completes both tasks at once; the two tasks are learned simultaneously and promote each other, jointly improving detection and segmentation accuracy while raising efficiency. The segmentation branch segments only the foreground, attends to more foreground features, is markedly more accurate than existing panoptic segmentation methods, and effectively reduces segmentation labeling cost. The anchor frame allocation strategy for Yolov5 detection-branch training is improved, making the detection branch more accurate. In addition, the segmentation branch result makes the detected target bounding box more stable and precisely locates the obstacle's ground contact point.
The implementation method of the system of the embodiment of the invention mainly comprises three parts:
the first part is used for collecting a blind area data set and sample labels;
the second part is that an anchor frame distribution strategy during detection branch training is constructed and improved based on a Yolov5 blind area detection and segmentation joint network structure;
and the third part is deployment of the blind area detection and segmentation multi-task system, with an applied alarm example.
The picture data and labeling data from the first part feed the joint model training of the second part; targeting the visual characteristics of the blind area, the second part builds a new joint network structure and improves the anchor frame allocation strategy of the detection branch so that the model reaches higher accuracy; the third part deploys the multi-task model trained in the second part on mobile equipment together with a practical alarm strategy. The whole system assists the driver in monitoring the blind area quickly and in real time during driving, and raises a timely alarm whenever an obstacle enters the blind area range, reminding the driver to drive carefully.
Example one
As shown in fig. 1, the present embodiment provides a vehicle blind area detection system, which includes a vehicle blind area detection model, and the implementation process of the detection model includes the following steps:
s1, collecting a blind area data set, carrying out sample labeling, and marking out a foreground target surrounding frame, a foreground target category and a category to which a foreground segmentation pixel belongs of each picture sample; converting the labeled foreground object bounding box label into a yolo format label;
as shown in fig. 2: installing a camera behind the right side of the vehicle to monitor a front blind area forwards, driving the vehicle on the road, recording a blind area video, and finally processing the blind area video into a sample picture; marking a foreground target surrounding frame and a category of each sample picture by using a marking tool, and marking the category of a foreground segmentation pixel, wherein the category mainly comprises pedestrians and bicyclists; and converting the labeled bounding box label into a yolo format label (x, y, w, h).
S2, constructing a blind area detection and segmentation joint network structure based on a Yolov5 network, and improving a sample distribution strategy during detection branch training;
As shown in fig. 3, the blind area detection and segmentation joint network structure includes an input layer, a backbone network, a feature pyramid fusion layer, two detection branches, and a segmentation branch; the two detection branches are used for positive/negative sample allocation, classification and regression training and output the foreground object classes, while the segmentation branch refines the foreground object classes output by the detection branches to the pixel level, yielding the class of each foreground segmentation pixel;
as known to those skilled in the art, the underlying network of Yolov5 includes an input layer, a backbone network (CSPNet), and a feature pyramid fusion layer (FPN). According to the embodiment of the invention, according to the characteristic that most of targets in the blind area are small and medium targets, only 8-time and 16-time detection heads are reserved in the detection branches to form two detection branches which are respectively used for the distribution, classification and regression training of positive and negative samples, and the number of the preset frames Anchor is increased from 3 to 5, so that more marking frames can be covered as much as possible, and the training regression difficulty is reduced. As is well known, the detection frame of the model prediction is difficult to avoid fluctuation, and the detection frame cannot accurately give the falling foot point of the barrier, so that the fast and accurate alarm response required for the blind area detection is inconsistent. Therefore, the embodiment of the invention adds a segmentation branch on the basis of the original detection network structure, the main purpose of the segmentation branch is to provide the foreground category at the pixel level corresponding to the foreground category from the detection frame, and then the foreground target can be accurate to the pixel level, at this time, the smallest enclosing frame can be obtained according to the segmentation foreground target, so that the detection frame can be corrected, and meanwhile, the foot landing point of the obstacle on the ground can be accurately known from the segmented area. The reason why the human body is learned by two branches rather than directly by the segmentation method is that the two tasks are mutually promoted by simultaneous learning, thereby jointly improving the detection and segmentation accuracy.
The sample allocation strategy is: if the aspect ratio of the detection target equals 1, take the anchor frame at the center point and the two anchor frames closest to the center point as positive samples; if the aspect ratio is less than 1, take only the center point and its vertically nearest neighbor; if the aspect ratio is greater than 1, take only the center point and its horizontally nearest neighbor as positive samples; all remaining points are negative samples.
As mentioned before, the two detection branches actually involve two subtasks: a classification branch for the anchor box (Anchor Box) and a regression branch for the anchor box. Anchors are placed to reduce the difficulty of regression; a detection task usually places W x H x N anchors at the network's output layer (assuming the output feature map has size W x H). In a conventional detection algorithm such as Faster R-CNN, N is 9, and an anchor box is assigned as a positive or negative sample by checking whether its overlap (Intersection over Union, IoU) with the ground-truth bounding box (GT BBox) exceeds a preset threshold, typically 0.5: greater means positive, smaller means negative. Since Faster R-CNN is a two-stage detector, the whole framework runs slowly. The later SSD algorithm uses the same allocation strategy as Faster R-CNN, but the SSD pipeline is not as simple as Yolo, hence the Yolo series. Among the popular v3, v4, and v5 versions, the sample allocation strategies of v3 and v4 are the same, while v5 adopts a different strategy to improve accuracy.
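The classic IoU-threshold assignment described above can be sketched as follows. The function `assign_by_iou` and the (x1, y1, x2, y2) box representation are illustrative assumptions; the 0.5 default matches the common threshold mentioned in the text.

```python
def assign_by_iou(anchors, gt_box, thresh=0.5):
    """Faster R-CNN / SSD style assignment: an anchor is a positive sample
    when its IoU with the ground-truth box exceeds the threshold."""
    def iou(a, b):
        # intersection width/height, clamped at zero for disjoint boxes
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0
    return ["pos" if iou(a, gt_box) > thresh else "neg" for a in anchors]
```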
As shown in fig. 4, Yolov3 and Yolov4 determine that an anchor box is a positive sample as follows: after the center of the ground-truth bounding box (GT BBox) falls into a feature map grid cell, the corresponding anchor box is judged positive if its overlap (IoU) with the GT BBox is greater than a certain threshold. Yolov5 improves on this allocation strategy: as shown in fig. 5, not only the anchor box at the center point is taken as a candidate positive sample, but also the anchor boxes of the two grid cells nearest the center, so the number of positive samples in Yolov5 is 3 to 9 times that of v3 and v4.
As shown in fig. 6, the embodiment of the present invention considers that Yolov5's sample allocation should depend on the target's aspect ratio, and therefore proposes an improved allocation: when the aspect ratio equals 1, the original Yolov5 allocation is kept; when the aspect ratio is less than 1, only the center point and its vertically nearest neighbor are taken; when the aspect ratio is greater than 1, only the center point and its horizontally nearest neighbor are taken. This greatly improves the quality of the points selected as positive samples.
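The improved aspect-ratio-dependent allocation can be sketched in grid-cell terms as follows. The function name, the (col, row) cell indexing, and the use of width/height as "aspect ratio" are illustrative assumptions; the fractional part of the center coordinate decides which neighbor cell is nearest, as in Yolov5.

```python
def positive_cells(cx, cy, w, h):
    """Grid cells treated as positive samples for one ground-truth box.

    cx, cy: box centre in feature-map coordinates; w, h: box width/height.
    Returns a list of (col, row) indices per the improved allocation rule.
    """
    col, row = int(cx), int(cy)
    # nearest horizontal / vertical neighbour of the centre cell
    h_nb = (col - 1, row) if cx - col < 0.5 else (col + 1, row)
    v_nb = (col, row - 1) if cy - row < 0.5 else (col, row + 1)
    ar = w / h
    if ar == 1:            # square target: original Yolov5 rule, centre + 2 neighbours
        return [(col, row), h_nb, v_nb]
    if ar < 1:             # tall target: centre + vertical neighbour only
        return [(col, row), v_nb]
    return [(col, row), h_nb]  # wide target: centre + horizontal neighbour only
```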
And S3, determining positive and negative samples in the training process according to the sample distribution strategy, and performing detection and regression combined training to obtain a trained detection model of the vehicle blind area.
With the sample allocation strategy of the detection branches, the positive and negative samples during training are determined. The sample classification branch of the two detection branches is trained with a binary cross entropy loss function, the regression branch with a CIOU loss function, and the segmentation branch also with a binary cross entropy loss function. During training, the background label is set to 0, the foreground object class labels are 1, 2, 3 … n, and n is an integer greater than or equal to 1.
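The CIoU term used by the regression branch can be computed as below; this is a pure-Python sketch of the standard Complete-IoU formula (the regression loss is then 1 - CIoU), and the (x1, y1, x2, y2) box format is an assumption.

```python
import math

def ciou(box1, box2):
    """Complete IoU between two (x1, y1, x2, y2) boxes.

    CIoU = IoU - rho^2 / c^2 - alpha * v, where rho is the centre distance,
    c the diagonal of the smallest enclosing box, and v penalizes
    aspect-ratio mismatch. The regression loss is 1 - ciou(pred, gt).
    """
    x1, y1, x2, y2 = box1
    X1, Y1, X2, Y2 = box2
    # intersection and union
    iw = max(0.0, min(x2, X2) - max(x1, X1))
    ih = max(0.0, min(y2, Y2) - max(y1, Y1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (X2 - X1) * (Y2 - Y1) - inter
    iou = inter / union
    # squared centre distance over squared enclosing-box diagonal
    rho2 = ((x1 + x2 - X1 - X2) ** 2 + (y1 + y2 - Y1 - Y2) ** 2) / 4.0
    cw, ch = max(x2, X2) - min(x1, X1), max(y2, Y2) - min(y1, Y1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((x2 - x1) / (y2 - y1))
                              - math.atan((X2 - X1) / (Y2 - Y1))) ** 2
    alpha = v / (1.0 - iou + v + 1e-9)
    return iou - rho2 / c2 - alpha * v
```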
The trained system needs to be deployed for use. The system of the embodiment of the invention can be deployed on a mobile equipment platform; an embedded HiSilicon (Hisi) device platform is taken as the exemplary platform, though other platforms can be used. Specifically, during deployment the joint model must be quantized with the quantization tool provided by the Hisi platform to obtain a quantized model file, and the quantized joint model is then called through the NNIE SDK interface to perform inference.
As further shown in fig. 7, the deployment is configured to perform the following blind spot detection procedure:
after initialization, acquiring a real-time blind area monitoring picture through a camera module;
reading the detection model file (on embedded HiSilicon devices, via the NNIE SDK software interface), and feeding the monitoring picture to the calling interface for forward reasoning;
acquiring a forward reasoning result comprising a detection result and a segmentation result; the detection result is decoded to obtain the obstacle bounding box; from the segmentation result the foreground target is obtained, and its minimum enclosing box is extracted to correct the detection box, so that the obstacle's ground contact point on the ground is obtained precisely.
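The detection-frame correction step above can be sketched as follows, assuming the segmentation output is a binary foreground mask; the function name and the bottom-center definition of the ground contact point are illustrative assumptions.

```python
import numpy as np

def refine_box(mask):
    """Tightest bounding box around the foreground pixels of a binary mask,
    plus the ground contact point (centre of the lowest foreground row)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None, None  # no foreground detected in this mask
    x1, x2 = xs.min(), xs.max()
    y1, y2 = ys.min(), ys.max()
    # ground contact point: centre of the bottom-most foreground row
    bottom_xs = xs[ys == y2]
    foothold = (int(bottom_xs.mean()), int(y2))
    return (int(x1), int(y1), int(x2), int(y2)), foothold
```

The returned box can replace (or be averaged with) the decoded detection box to stabilize it across frames.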
The above blind area detection process runs in a loop. For the right-side blind area of the vehicle, after obtaining the forward reasoning result the process can further include:
determining, from the corrected detection frame, which level of the preset warning areas it falls into, and issuing an early warning in time according to the warning level corresponding to that determination, prompting the driver to drive carefully.
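The warning-level judgment can be sketched as below, assuming rectangular warning zones ordered from most to least dangerous; the rectangular zone representation is an illustrative simplification of the patent's preset warning areas.

```python
def warning_level(foothold, zones):
    """Return the index of the first (most dangerous) warning zone containing
    the obstacle's ground contact point, or None if it lies outside all zones.

    zones: list of (x1, y1, x2, y2) rectangles, ordered from highest to
    lowest danger level — an assumed simplification of the preset areas.
    """
    x, y = foothold
    for level, (x1, y1, x2, y2) in enumerate(zones):
        if x1 <= x <= x2 and y1 <= y <= y2:
            return level
    return None
```

A level of 0 would then trigger the most urgent alarm; None means no warning is needed.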
Because the obtained forward reasoning result includes a segmentation result, the segmentation branch result can be used to stabilize the detected target bounding box and to obtain the obstacle's precise ground contact point.
The technical scheme provided by the embodiment of the invention has at least the following technical effects or advantages: the invention is based on a Yolov5 multi-task network structure combining blind area detection and segmentation, so that one network completes both tasks at the same time; the two tasks promote each other when learned simultaneously, improving detection and segmentation accuracy as well as efficiency. The segmentation branch segments only the foreground, attends to more foreground features, is markedly more accurate than existing panoptic segmentation methods, and effectively reduces segmentation labeling cost. The anchor frame allocation strategy for Yolov5 detection-branch training is improved, making the detection branch more accurate. In addition, the segmentation branch result makes the detected target bounding box more stable and precisely locates the obstacle's ground contact point.
While specific embodiments of the invention have been described, it will be understood by those skilled in the art that the specific embodiments described are illustrative only and are not limiting upon the scope of the invention, as equivalent modifications and variations as will be made by those skilled in the art in light of the spirit of the invention are intended to be included within the scope of the appended claims.
Claims (4)
1. A vehicle blind area detection system, characterized in that: it comprises a detection model of the vehicle blind area, and the implementation process of the detection model comprises the following steps:
s1, collecting a blind area data set, labeling samples, and marking a foreground target surrounding frame, a foreground target category and a category to which a foreground segmentation pixel belongs of each picture sample; converting the labeled foreground object bounding box label into a yolo format label;
s2, constructing a blind area detection and segmentation joint network structure based on a Yolov5 network, and improving a sample distribution strategy during detection branch training;
the blind area detection and segmentation joint network structure comprises an input layer, a backbone network, a feature pyramid fusion layer, two detection branches and a segmentation branch; the two detection branches are used for positive/negative sample allocation, classification and regression training and output the foreground object classes, while the segmentation branch refines the foreground object classes output by the detection branches to the pixel level, yielding the class of each foreground segmentation pixel;
the sample allocation strategy is: if the aspect ratio of the detection target equals 1, take the anchor frame at the center point and the two anchor frames closest to the center point as positive samples; if the aspect ratio is less than 1, take only the center point and its vertically nearest neighbor; if the aspect ratio is greater than 1, take only the center point and its horizontally nearest neighbor as positive samples; all remaining points are negative samples;
and S3, determining positive and negative samples in the training process according to the sample distribution strategy, and performing detection and regression combined training to obtain a trained detection model of the vehicle blind area.
2. The vehicle blind area detection system according to claim 1, characterized in that: in step S3, the two detection branches include a sample classification branch and a regression branch; the classification branch is trained with a binary cross entropy loss function and the regression branch with a CIOU loss function; the segmentation branch is trained with a binary cross entropy loss function;
during training, the background label is set to 0, the foreground object class labels are 1, 2 and 3 … n, and n is an integer larger than 1.
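The claim names the CIoU loss but does not spell it out. The sketch below is the standard CIoU formulation (IoU penalised by centre distance and aspect-ratio mismatch), written for single axis-aligned boxes rather than batched tensors; the patent's actual training code is not given:

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss between a predicted and a ground-truth box (x1, y1, x2, y2).

    loss = 1 - (IoU - rho^2/c^2 - alpha*v), where rho is the centre distance,
    c the diagonal of the smallest enclosing box, and v measures the
    aspect-ratio inconsistency between the two boxes.
    """
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # intersection over union
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # squared centre distance over squared enclosing-box diagonal
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                              - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - (iou - rho2 / c2 - alpha * v)
```

A perfectly matched box gives a loss of 0; disjoint boxes are penalised beyond 1 by the centre-distance term, which is what lets CIoU provide a gradient even when the boxes do not overlap.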
3. The vehicle blind area detection system according to claim 1, wherein after deployment the system performs the following blind area detection process:
acquiring a real-time blind area monitoring image through a camera module;
reading the detection model file, and feeding the monitoring image into the inference interface for forward inference;
obtaining the forward inference result, comprising a detection result and a segmentation result; the detection result is decoded to obtain obstacle bounding boxes; from the segmentation result, the foreground target is extracted and its minimal bounding box is used to correct the detection box, so that the obstacle's foot point on the ground is obtained accurately.
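The mask-based box correction can be sketched with NumPy. The mask layout (HxW, nonzero = foreground) and the choice of the bottom-edge midpoint as the ground foot point are illustrative assumptions consistent with the step above:

```python
import numpy as np

def refine_box_with_mask(det_box, mask):
    """Correct a detected box using the segmentation mask.

    Takes the minimal axis-aligned box around foreground pixels inside the
    detection, and reads the obstacle's ground foot point from the midpoint
    of that box's bottom edge.

    det_box: (x1, y1, x2, y2) in image coordinates.
    mask:    HxW array, nonzero = foreground.
    """
    x1, y1, x2, y2 = det_box
    region = mask[y1:y2, x1:x2]
    ys, xs = np.nonzero(region)
    if ys.size == 0:                       # no foreground: keep original box
        return det_box, ((x1 + x2) // 2, y2)
    refined = (int(x1 + xs.min()), int(y1 + ys.min()),
               int(x1 + xs.max() + 1), int(y1 + ys.max() + 1))
    # foot point: midpoint of the refined box's bottom edge
    foot = ((refined[0] + refined[2]) // 2, refined[3])
    return refined, foot
```

Because the segmentation mask hugs the object more tightly than a decoded detection box, the refined bottom edge gives a more reliable estimate of where the obstacle meets the ground.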
4. The vehicle blind area detection system according to claim 3, wherein for the right blind area of the vehicle, after obtaining the forward inference result, the method further comprises:
determining, from the corrected detection box, which of the preset two-level warning areas it falls into, and issuing a timely warning at the corresponding warning level to prompt the driver to drive carefully.
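The two-level warning decision can be sketched as a point-in-region test. The rectangular zone geometry and level numbering here are illustrative stand-ins for the patent's preset right-side warning areas, which the claim does not define in detail:

```python
def warning_level(foot_x, foot_y, zones):
    """Map an obstacle's ground foot point to a warning level.

    zones: list of (level, x1, y1, x2, y2) rectangles in image coordinates,
    ordered from most to least critical so the inner zone wins when zones
    are nested. Returns 0 if the foot point is outside all warning areas.
    """
    for level, x1, y1, x2, y2 in zones:
        if x1 <= foot_x <= x2 and y1 <= foot_y <= y2:
            return level
    return 0

# example: level 1 (inner, urgent) nested inside level 2 (outer, caution)
zones = [(1, 0, 0, 100, 200), (2, 0, 0, 200, 400)]
```

Checking the inner (urgent) zone first means a foot point inside both rectangles triggers the stronger warning, matching the intent of graded alerts.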
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210490050.7A CN114782923B (en) | 2022-05-07 | 2022-05-07 | Detection system for dead zone of vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114782923A true CN114782923A (en) | 2022-07-22 |
CN114782923B CN114782923B (en) | 2024-05-03 |
Family
ID=82435932
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210490050.7A Active CN114782923B (en) | 2022-05-07 | 2022-05-07 | Detection system for dead zone of vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782923B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110135398A (en) * | 2019-05-28 | 2019-08-16 | 厦门瑞为信息技术有限公司 | Both hands off-direction disk detection method based on computer vision |
CN110942000A (en) * | 2019-11-13 | 2020-03-31 | 南京理工大学 | Unmanned vehicle target detection method based on deep learning |
CN111738056A (en) * | 2020-04-27 | 2020-10-02 | 浙江万里学院 | Heavy truck blind area target detection method based on improved YOLO v3 |
WO2021004077A1 (en) * | 2019-07-09 | 2021-01-14 | 华为技术有限公司 | Method and apparatus for detecting blind areas of vehicle |
CN113156421A (en) * | 2021-04-07 | 2021-07-23 | 南京邮电大学 | Obstacle detection method based on information fusion of millimeter wave radar and camera |
CN113657161A (en) * | 2021-07-15 | 2021-11-16 | 北京中科慧眼科技有限公司 | Non-standard small obstacle detection method and device and automatic driving system |
Non-Patent Citations (1)
Title |
---|
LIU Handing; ZHANG Genyuan; ZHU Zhenzhen: "Design of a Dedicated Blind Area Warning System for Engineering Vehicles", 科技展望 (Science and Technology Outlook), no. 18, 30 June 2017 (2017-06-30), pages 129-130 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117709B (en) | Collision avoidance system for autonomous vehicles | |
CN106647776B (en) | Method and device for judging lane changing trend of vehicle and computer storage medium | |
CN112424793B (en) | Object recognition method, object recognition device and electronic equipment | |
CN111507162B (en) | Blind spot warning method and device based on cooperation of inter-vehicle communication | |
CN113343461A (en) | Simulation method and device for automatic driving vehicle, electronic equipment and storage medium | |
CN111967396A (en) | Processing method, device and equipment for obstacle detection and storage medium | |
KR102355431B1 (en) | AI based emergencies detection method and system | |
CN111351474B (en) | Vehicle moving target detection method, device and system | |
CN111985388B (en) | Pedestrian attention detection driving assistance system, device and method | |
CN114418895A (en) | Driving assistance method and device, vehicle-mounted device and storage medium | |
EP2741234B1 (en) | Object localization using vertical symmetry | |
CN114267082A (en) | Bridge side falling behavior identification method based on deep understanding | |
CN114067292A (en) | Image processing method and device for intelligent driving | |
CN112769877A (en) | Group fog early warning method, cloud server, vehicle and medium | |
CN114419603A (en) | Automatic driving vehicle control method and system and automatic driving vehicle | |
CN114972911A (en) | Method and equipment for collecting and processing output data of automatic driving perception algorithm model | |
CN111767850A (en) | Method and device for monitoring emergency, electronic equipment and medium | |
JP2024515761A (en) | Data-driven dynamically reconstructed disparity maps | |
CN117612117A (en) | Roadside near weed segmentation method, system and medium based on vehicle-mounted recorder | |
CN117416349A (en) | Automatic driving risk pre-judging system and method based on improved YOLOV7-Tiny and SS-LSTM in V2X environment | |
CN114782923A (en) | Vehicle blind area detection system | |
Sato et al. | Scene recognition for blind spot via road safety mirror and in-vehicle camera | |
CN115171428B (en) | Vehicle cut-in early warning method based on visual perception | |
Siddiqui et al. | Object/Obstacles detection system for self-driving cars | |
CN110177222B (en) | Camera exposure parameter adjusting method and device combining idle resources of vehicle machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||