CN117152513A - Vehicle boundary positioning method for night scene

Vehicle boundary positioning method for night scene

Info

Publication number
CN117152513A
Authority
CN
China
Prior art keywords
model
vehicle
positioning
image data
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311105460.6A
Other languages
Chinese (zh)
Inventor
董振江
李浩然
董建阔
亓晋
孙雁飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202311105460.6A priority Critical patent/CN117152513A/en
Publication of CN117152513A publication Critical patent/CN117152513A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Abstract

The invention belongs to the technical field of computer vision and discloses a vehicle boundary positioning method for night scenes. Image data of night road scenes are collected; a detection model performs bounding-box annotation on the preprocessed image data, and the detection model is trained with the labeled training set to obtain a final teacher model. A student model is then constructed, and the boundary positioning knowledge learned by the teacher model is distilled into it with a positioning distillation method, finally yielding a lightweight network model that outputs the vehicle positions in the night scene. By using positioning distillation, the invention effectively solves the difficulty of locating blurred boundaries and improves the positioning accuracy of the detection model. When deployed in a vehicle-mounted 360-degree panoramic image system, the model can detect surrounding vehicles in real time for more accurate positioning, helping to avoid traffic accidents and providing better conditions for automatic driving.

Description

Vehicle boundary positioning method for night scene
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a vehicle boundary positioning method for night scenes.
Background
Today, artificial intelligence technology is developing and spreading rapidly, and computer vision has become a hot research field within it. Image processing in particular is applied widely and deeply in intelligent transportation, for example in driverless cars, unmanned logistics vehicles, and road traffic monitoring, which makes vehicle detection systems based on deep learning increasingly important. In the field of computer vision, determining the position of a vehicle bounding box is an important problem: the aim is to accurately locate the vehicle in an image and determine its size. Existing night-time image recognition systems, however, perform poorly, mainly because of interference from light sources such as the headlights of oncoming vehicles and street lamps on the road. Night-time illumination is unevenly distributed: stretches without street lamps and distant vehicles appear darker, while nearby vehicles may be over-bright because of oncoming headlights or street lamps. Such brightness variations easily cause errors in a vehicle's night recognition system: recognition accuracy drops, vehicle bounding boxes become ambiguous, and vehicle boundaries blend into the night background so that they cannot be clearly delineated, which reduces the positioning accuracy of the vehicle bounding box. In addition, the computing power of vehicle-mounted mobile hardware is generally low, so the network model must be lightweight. These issues seriously affect the detection accuracy and real-time performance of vehicle boundary detection algorithms. Therefore, how to accurately position vehicle bounding boxes in night scenes while keeping the network model as small as possible is an urgent need in current computer vision research.
Patent application CN115171079A discloses a vehicle detection method based on night scenes, which uses a dynamic filter network to generate sample-specific convolution kernels and applies different enhancement methods to constrain each enhancement sub-network for different night image samples. However, the resulting model takes no lightweighting measures: it is relatively large and demands strong computing power, so when actually deployed on a vehicle, the low computing power of on-board equipment may slow detection or even cause it to fail, which prevents deployment on similar low-compute terminals. The method also does not optimize for boundary ambiguity in night environments; in real scenes, when a large vehicle such as a truck is encountered, its boundary cannot be located efficiently and accurately, and inaccurate boundary judgments may then lead to a series of traffic accidents.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a vehicle boundary positioning method for night scenes, which enhances night-time vehicle boundary positioning while making the network model lightweight.
The disclosed vehicle boundary positioning method for night scenes comprises the following steps:
S1, acquiring image data of night road scenes;
S2, dividing the acquired image data into a training set, a validation set, and a test set, and preprocessing the image data;
S3, performing frame labeling on the preprocessed image data with a detection model to obtain a labeled image dataset; training the detection model with the labeled image dataset and the training set to obtain the final teacher model;
S4, constructing a student model, and distilling the boundary positioning knowledge learned by the teacher model with a positioning distillation method, finally obtaining a lightweight network model;
S5, inputting the test set data into the lightweight network model, outputting the position coordinate information of the vehicle, setting a threshold to filter the positioning frames, and deleting repeated positioning frames by non-maximum suppression, finally obtaining the vehicle position in the night scene.
In implementation, the selected models can be replaced; the teacher model can be any larger detection model with more parameters.
Further, S2 is specifically:
Image data are acquired with a vehicle-mounted camera or a road monitoring camera. Images captured by the camera are originally in RAW format, which records the raw information of the camera sensor, including the camera's shutter speed and aperture value. Using the RAW format as input to a deep learning network is equivalent to introducing this information, and the network can output a conventional RGB picture with better illumination enhancement.
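As an illustration only, such an enhancement network might look like the following sketch. The architecture is an assumption, since the disclosure names no specific network: it packs the RAW Bayer mosaic into four half-resolution channels, a common practice for RAW-input networks, and restores full resolution with a pixel shuffle.

```python
# A hypothetical RAW-to-RGB enhancement network of the kind described here.
# The layer widths and depth are assumptions for illustration only.
import torch
import torch.nn as nn

class RawToRGBNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 12, 3, padding=1),  # 12 = 3 RGB channels x 2x2 upsample
            nn.PixelShuffle(2))                  # back to full resolution

    @staticmethod
    def pack_bayer(raw):
        # raw: (N, 1, H, W) Bayer mosaic -> (N, 4, H/2, W/2) RGGB planes.
        return torch.cat([raw[:, :, 0::2, 0::2], raw[:, :, 0::2, 1::2],
                          raw[:, :, 1::2, 0::2], raw[:, :, 1::2, 1::2]], dim=1)

    def forward(self, raw):
        return self.body(self.pack_bayer(raw))  # (N, 3, H, W) enhanced RGB
```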
Further, S3 is specifically:
labeling the image data with the detection model to obtain labeling results, and correcting the labeling results; the obtained labels are the coordinate information of the positioning frames, namely the coordinate points of the four corners of each rectangular frame, and the labels can be visualized with a tool; erroneous positioning frames, where the bounding box visibly fails to enclose the target or deviates substantially in position, are deleted manually;
the corrected labeling data are then used as training data to continue training the detection model until it converges.
Further, the construction of the student model is specifically as follows:
in the initial stage, a convolution operation is followed by Batch Normalization, then a ReLU activation function and a max pooling layer that reduce the size of the feature map;
the model is then divided into three stages: the first stage comprises a downsampling module and 4 basic modules connected in sequence; the second stage connects a downsampling module and 8 basic modules in sequence to further extract and process features; the third stage comprises a downsampling module and 4 basic modules to deepen the expressive power of the features; in each stage, the basic module consists of convolution, normalization, activation functions, and a CBAM attention module; connections between basic modules are realized through a channel shuffle mechanism;
then, the model again applies convolution and Batch Normalization, followed by global average pooling, which converts the feature map to a fixed size;
finally, the features are mapped to the final output categories through a fully connected layer to obtain the prediction result.
Further, S4 is specifically: inputting the labeling results and the image data of the training set and the validation set into the teacher model and the student model simultaneously to obtain vehicle positioning frame information from each;
the difference between the vehicle positioning frame information output by the teacher model and that output by the student model is taken as the first part of the loss function, LOSS1; the difference between the student model's output and the real labels is taken as the second part, LOSS2; the difference between the positioning frame confidence output by the student model and that output by the teacher model is taken as the third part, LOSS3. LOSS1, LOSS2, and LOSS3 all use a smooth L1 loss function. The weighted sum of LOSS1, LOSS2, and LOSS3 serves as the final loss function for training the student model until convergence, yielding the final lightweight network model. LOSS1 and LOSS2 let the student model learn to delineate boundaries more accurately, and LOSS3 lets it learn the teacher model's judgment of bounding-box confidence.
The beneficial effects of the invention are as follows. Aiming at the problem of blurred vehicle bounding boxes in night scenes, the method uses a teacher-student model to process vehicle data acquired at night. First, a detection model produces annotations, which are manually reviewed and corrected; the corrected annotations are used as training data to retrain the detection model, the retrained model annotates the data again, and this process is repeated so that annotation and training proceed iteratively. The model is considered converged when its performance and annotation quality approach a stable state. This strategy achieves data annotation and model training at the same time, and the trained model serves as the teacher model. The constructed student model is compact and efficient: channel shuffle, grouped convolution, and similar techniques greatly reduce its computation and memory overhead, making it suitable for resource-limited devices, while the CBAM attention module further strengthens the model's feature representation and improves performance and accuracy. For low-compute platforms with demanding deployment requirements, the invention introduces a positioning distillation technique that effectively solves the difficulty of locating blurred boundaries and prevents scraping or even collision caused by inaccurate boundary judgment when a vehicle blends into the background. The positioning accuracy of the detection model is improved, and vehicle violations in intelligent traffic scenes can be judged accurately. Deployed in a vehicle-mounted 360-degree panoramic image system, the model can detect surrounding vehicles in real time for more accurate positioning, helping to avoid traffic accidents and providing better conditions for automatic driving.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a student model training architecture;
FIG. 3 is a schematic diagram of the student model structure.
Detailed Description
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
As shown in FIG. 1 and FIG. 2, the vehicle boundary positioning method for night scenes of the present invention comprises the following steps:
S1: Datasets are collected at 7 urban road intersections, each equipped with two cameras. A camera estimates the distance to an approaching vehicle once it is recognized and photographs the vehicle at distances of 3 meters, 6 meters, and 10 meters, recording images of vehicles passing between 8 p.m. and 2 a.m. After one week of collection and screening, 100,000 images of vehicles in dark environments were gathered.
S2, dividing the acquired image data into a training set, a validation set, and a test set, and preprocessing the image data;
S3, performing frame labeling on the preprocessed image data with a detection model to obtain a labeled image dataset; training the detection model with the labeled image dataset and the training set to obtain the final teacher model;
S4, constructing a student model. As shown in FIG. 3, the student model is an efficient neural network that takes RGB images as input. In the initial stage, a convolution operation is followed by Batch Normalization, then a ReLU activation function and a max pooling layer, which effectively reduces the size of the feature map. The model is then divided into three stages. The first stage consists of one downsampling module (DownsampleUnit) and 4 basic modules (BasicUnit), connected to each other in sequence. The second stage consists of a downsampling module and 8 basic modules, connected in sequence to further extract and process features. The third stage comprises a downsampling module and 4 basic modules to deepen the expressive power of the features. In each stage, the basic module (BasicUnit) is the core of the model, consisting of convolution, normalization, activation functions, and a CBAM attention module. Connections between basic modules are realized through a channel shuffle mechanism, which effectively promotes feature exchange and information flow and improves the capability of feature representation.
After the three stages, the model again applies convolution and Batch Normalization, followed by global average pooling, which converts the feature map to a fixed size in preparation for the fully connected layer. Finally, the features are mapped to the final output categories through the fully connected layer to obtain the prediction result.
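The following PyTorch sketch illustrates one possible reading of this architecture. The channel widths, the CBAM reduction ratio, the residual connection inside the basic module, and the output head size are assumptions added for illustration; the description fixes only the stem, the 4/8/4 stage layout, the CBAM module, and the channel shuffle mechanism.

```python
# A minimal sketch of the described student model backbone; not the exact
# ShuffleAttentionDet implementation, only an illustration of its stated layout.
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    # Rearrange channels so information mixes between groups of channels.
    n, c, h, w = x.shape
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2).contiguous().view(n, c, h, w))

class CBAM(nn.Module):
    # Convolutional Block Attention Module: channel attention, then spatial attention.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, max(channels // reduction, 1)), nn.ReLU(),
            nn.Linear(max(channels // reduction, 1), channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        n, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # channel attention from avg pool
        mx = self.mlp(x.amax(dim=(2, 3)))             # and from max pool
        x = x * torch.sigmoid(avg + mx).view(n, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))     # spatial attention map

class BasicUnit(nn.Module):
    # Convolution + normalization + activation + CBAM, then channel shuffle.
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True), CBAM(channels))

    def forward(self, x):
        return channel_shuffle(self.block(x) + x)     # residual link is an assumption

class Downsample(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(cin, cout, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class StudentNet(nn.Module):
    def __init__(self, num_outputs=5):  # e.g. 4 box coordinates + confidence; assumption
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 24, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(24), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))
        def stage(cin, cout, n):
            return nn.Sequential(Downsample(cin, cout),
                                 *[BasicUnit(cout) for _ in range(n)])
        self.stage1 = stage(24, 48, 4)    # downsampling module + 4 basic modules
        self.stage2 = stage(48, 96, 8)    # downsampling module + 8 basic modules
        self.stage3 = stage(96, 192, 4)   # downsampling module + 4 basic modules
        self.tail = nn.Sequential(
            nn.Conv2d(192, 512, 1, bias=False), nn.BatchNorm2d(512),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())     # global average pooling
        self.fc = nn.Linear(512, num_outputs)          # fully connected output layer

    def forward(self, x):
        x = self.stem(x)
        x = self.stage3(self.stage2(self.stage1(x)))
        return self.fc(self.tail(x))

# e.g. StudentNet()(torch.randn(1, 3, 224, 224)) yields a tensor of shape (1, 5).
```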
As shown in FIG. 2, the boundary positioning knowledge learned by the teacher model is distilled with a positioning distillation method: the labeling results and the image data of the training set and validation set are input into the teacher model and the student model simultaneously to obtain vehicle positioning frame information from each, finally yielding the lightweight network model.
The difference between the vehicle positioning frame information output by the teacher model and that output by the student model is taken as the first part of the loss function, LOSS1; the difference between the student model's output and the real labels is taken as the second part, LOSS2; the difference between the positioning frame confidence output by the student model and that output by the teacher model is taken as the third part, LOSS3. LOSS1, LOSS2, and LOSS3 all use a smooth L1 loss function. The weighted sum of LOSS1, LOSS2, and LOSS3 serves as the final loss function for training the student model until convergence, yielding the final lightweight network model. LOSS1 and LOSS2 let the student model learn to delineate boundaries more accurately, and LOSS3 lets it learn the teacher model's judgment of bounding-box confidence.
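For reference, the smooth L1 loss used by all three terms has the standard form

$$\mathrm{smooth}_{L1}(x)=\begin{cases}0.5x^{2}, & |x|<1\\ |x|-0.5, & \text{otherwise,}\end{cases}\qquad \mathrm{LOSS}=\lambda_{1}\,\mathrm{LOSS1}+\lambda_{2}\,\mathrm{LOSS2}+\lambda_{3}\,\mathrm{LOSS3},$$

where $x$ is the element-wise difference between the compared quantities; the transition point of 1 and the weights $\lambda_{1},\lambda_{2},\lambda_{3}$ are conventional defaults shown for illustration, as the disclosure does not fix them. A minimal sketch of the three-part loss follows, assuming box coordinates and confidences have already been extracted from each model's output; the weights w1, w2, w3 are placeholders:

```python
import torch.nn.functional as F

def distillation_loss(student_boxes, teacher_boxes, gt_boxes,
                      student_conf, teacher_conf, w1=1.0, w2=1.0, w3=0.5):
    # Teacher outputs are detached so that only the student network is updated
    # during back-propagation, as described in step S4.
    loss1 = F.smooth_l1_loss(student_boxes, teacher_boxes.detach())  # LOSS1: student vs. teacher
    loss2 = F.smooth_l1_loss(student_boxes, gt_boxes)                # LOSS2: student vs. ground truth
    loss3 = F.smooth_l1_loss(student_conf, teacher_conf.detach())    # LOSS3: confidence transfer
    return w1 * loss1 + w2 * loss2 + w3 * loss3
```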
S5, inputting the test set data into the lightweight network model, outputting the position coordinate information of the vehicle, setting a threshold to filter the positioning frames, and deleting repeated positioning frames by non-maximum suppression, finally obtaining the vehicle position in the night scene.
Specifically, S2: the acquired image data are first divided into a training set, a validation set, and a test set at a ratio of 8:1:1, and then preprocessed with traditional image processing methods. Gaussian filtering makes vehicle boundaries easier for the human eye to distinguish, which facilitates the subsequent labeling work. A first round of detection is then run over all data using the weights of a vehicle recognition model pre-trained on YOLOv5, producing a label file of detection results for each image. Next, the Gaussian-filtered dataset and the obtained label files are imported into a picture annotation tool; positioning frames detected in error are deleted, undetected positioning frames are added, and the boundaries of the vehicle positioning frames are fine-tuned to delineate the vehicle accurately, thereby manually completing the second round of labeling.
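A minimal sketch of this preprocessing step is shown below, assuming OpenCV and a flat list of image paths; the 5x5 Gaussian kernel is an assumption, since the description does not specify filter parameters:

```python
import random
import cv2

def split_dataset(paths, seed=0):
    # Shuffle deterministically, then split 8:1:1 into train/val/test.
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    return paths[:int(0.8 * n)], paths[int(0.8 * n):int(0.9 * n)], paths[int(0.9 * n):]

def gaussian_preprocess(path):
    img = cv2.imread(path)                   # BGR image as loaded by OpenCV
    return cv2.GaussianBlur(img, (5, 5), 0)  # smooth noise so vehicle boundaries stand out
```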
Step S3: the YOLOv5 model is selected as the teacher model. The labels generated by annotation, together with the training set and validation set of unprocessed original pictures, are input into the model for training, with the smooth L1 loss between the model's detected vehicle positions and the actual annotations as the model loss function. The pre-trained weight file yolov5m.pt is used as the initial weights to reduce the time and amount of data needed for training. The model is trained for 300 rounds on a GPU server with an NVIDIA GeForce RTX 3090, and the model weight file with the highest accuracy is selected as the final teacher model.
Step S4: the SAD (ShuffleAttentionDet) network model is selected as the student model. The labels generated by annotation, together with the training set and validation set of unprocessed original pictures, are input into both the teacher model and the student model. The weighted sum of the smooth L1 loss between the vehicle position information output by the teacher model and that output by the student model, and the smooth L1 loss between the confidences output by the two models, is taken as the training loss function; only the student network is updated during back-propagation. After 200 rounds of training, the model weights with the highest accuracy are selected as the final student model.
Step S5: the test set data are input into the obtained student model, which outputs the position coordinate information of the vehicle. The threshold is set to 0.5, positioning frames with confidence below 0.5 are filtered out, and redundant repeated positioning frames output by the network are deleted with non-maximum suppression, finally yielding the vehicle position in the night scene.
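A minimal sketch of this post-processing, using torchvision's NMS operator; the IoU threshold of 0.45 is an assumption, as the embodiment fixes only the 0.5 confidence threshold:

```python
from torchvision.ops import nms

def postprocess(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    # boxes: (N, 4) tensor in (x1, y1, x2, y2) format; scores: (N,) confidences.
    keep = scores >= conf_thresh              # drop low-confidence positioning frames
    boxes, scores = boxes[keep], scores[keep]
    kept = nms(boxes, scores, iou_thresh)     # remove redundant overlapping frames
    return boxes[kept], scores[kept]
```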
The foregoing is merely a preferred embodiment of the present invention, and is not intended to limit the present invention, and all equivalent variations using the description and drawings of the present invention are within the scope of the present invention.

Claims (5)

1. A vehicle boundary positioning method for night scenes, comprising the following steps:
S1, acquiring image data of night road scenes;
S2, dividing the acquired image data into a training set, a validation set, and a test set, and preprocessing the image data;
S3, performing frame labeling on the preprocessed image data with a detection model to obtain a labeled image dataset; training the detection model with the labeled image dataset and the training set to obtain the final teacher model;
S4, constructing a student model, and distilling the boundary positioning knowledge learned by the teacher model with a positioning distillation method, finally obtaining a lightweight network model;
S5, inputting the test set data into the lightweight network model, outputting the position coordinate information of the vehicle, setting a threshold to filter the positioning frames, and deleting repeated positioning frames by non-maximum suppression, finally obtaining the vehicle position in the night scene.
2. The vehicle boundary positioning method for night scenes according to claim 1, wherein S2 specifically comprises:
collecting image data with a vehicle-mounted camera or a road monitoring camera, the original image data being in RAW format; and using a deep learning network that takes the RAW format as input and outputs an illumination-enhanced RGB picture.
3. The vehicle boundary positioning method for night scenes according to claim 1, wherein S3 specifically comprises:
labeling the image data with the detection model to obtain labeling results, namely the coordinate information of the vehicle positioning frames, correcting the labeling results, and deleting erroneous positioning frames;
continuing to train the detection model with the corrected labeling data as training data until the detection model converges, obtaining the final teacher model.
4. The vehicle boundary positioning method for night scenes according to claim 3, wherein constructing the student model specifically comprises:
in the initial stage, a convolution operation is followed by Batch Normalization, then a ReLU activation function and a max pooling layer that reduce the size of the feature map;
the model is then divided into three stages: the first stage comprises a downsampling module and 4 basic modules connected in sequence; the second stage connects a downsampling module and 8 basic modules in sequence to further extract and process features; the third stage comprises a downsampling module and 4 basic modules to deepen the expressive power of the features; in each stage, the basic module consists of convolution, normalization, activation functions, and a CBAM attention module; connections between basic modules are realized through a channel shuffle mechanism;
then, the model again applies convolution and Batch Normalization, followed by global average pooling, which converts the feature map to a fixed size;
finally, the features are mapped to the final output categories through a fully connected layer to obtain the prediction result.
5. The vehicle boundary positioning method for night scenes according to claim 4, wherein S4 specifically comprises:
inputting the labeling results and the image data of the training set and the validation set into the teacher model and the student model simultaneously to obtain vehicle positioning frame information from each;
taking the difference between the vehicle positioning frame information output by the teacher model and that output by the student model as the first part of the loss function, LOSS1; taking the difference between the student model's output and the real labels as the second part, LOSS2; taking the difference between the positioning frame confidence output by the student model and that output by the teacher model as the third part, LOSS3; LOSS1, LOSS2, and LOSS3 all using a smooth L1 loss function; and training the student model with the weighted sum of LOSS1, LOSS2, and LOSS3 as its final loss function until convergence, obtaining the final lightweight network model.
CN202311105460.6A 2023-08-30 2023-08-30 Vehicle boundary positioning method for night scene Pending CN117152513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311105460.6A CN117152513A (en) 2023-08-30 2023-08-30 Vehicle boundary positioning method for night scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311105460.6A CN117152513A (en) 2023-08-30 2023-08-30 Vehicle boundary positioning method for night scene

Publications (1)

Publication Number Publication Date
CN117152513A true CN117152513A (en) 2023-12-01

Family

ID=88911280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311105460.6A Pending CN117152513A (en) 2023-08-30 2023-08-30 Vehicle boundary positioning method for night scene

Country Status (1)

Country Link
CN (1) CN117152513A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372819A (en) * 2023-12-07 2024-01-09 神思电子技术股份有限公司 Target detection increment learning method, device and medium for limited model space
CN117372819B (en) * 2023-12-07 2024-02-20 神思电子技术股份有限公司 Target detection increment learning method, device and medium for limited model space

Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN110163187B (en) F-RCNN-based remote traffic sign detection and identification method
CN111310862B (en) Image enhancement-based deep neural network license plate positioning method in complex environment
CN110399856B (en) Feature extraction network training method, image processing method, device and equipment
CN107563372B (en) License plate positioning method based on deep learning SSD frame
CN112464910A (en) Traffic sign identification method based on YOLO v4-tiny
CN111814621A (en) Multi-scale vehicle and pedestrian detection method and device based on attention mechanism
CN104700099A (en) Method and device for recognizing traffic signs
DE102012218390A1 (en) Optimizing the detection of objects in images
CN110659601B (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
CN112287896A (en) Unmanned aerial vehicle aerial image target detection method and system based on deep learning
CN117152513A (en) Vehicle boundary positioning method for night scene
CN113052159A (en) Image identification method, device, equipment and computer storage medium
CN114913498A (en) Parallel multi-scale feature aggregation lane line detection method based on key point estimation
CN110852358A (en) Vehicle type distinguishing method based on deep learning
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN112308066A (en) License plate recognition system
CN112364864A (en) License plate recognition method and device, electronic equipment and storage medium
CN112288701A (en) Intelligent traffic image detection method
CN116597270A (en) Road damage target detection method based on attention mechanism integrated learning network
CN116434056A (en) Target identification method and system based on radar fusion and electronic equipment
CN114550016B (en) Unmanned aerial vehicle positioning method and system based on context information perception
CN114882469A (en) Traffic sign detection method and system based on DL-SSD model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination