CN113657277A - System and method for judging shielded state of vehicle

System and method for judging shielded state of vehicle

Info

Publication number
CN113657277A
CN113657277A (application CN202110947023.3A)
Authority
CN
China
Prior art keywords
vehicle
target
shielded
information
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110947023.3A
Other languages
Chinese (zh)
Inventor
季思文
周翔
刘国清
朱晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Youjia Technology Co ltd
Original Assignee
Nanjing Youjia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Youjia Technology Co ltd filed Critical Nanjing Youjia Technology Co ltd
Priority to CN202110947023.3A priority Critical patent/CN113657277A/en
Publication of CN113657277A publication Critical patent/CN113657277A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a system and a method for determining the occluded state of a vehicle, belonging to the technical field of vehicle driver-assistance systems. The method comprises the following steps: acquiring a target image; performing target detection on the target image to obtain position information of a target vehicle; and inputting the position information into a deep neural network for feature extraction to obtain occlusion information of the target vehicle, where the occlusion information comprises occlusion state information, occlusion position information and occlusion degree information. By judging the occluded state of the vehicle hierarchically, the method provides the driver-assistance system with more comprehensive information about the occluded state of the vehicle and effectively improves the accuracy with which that state is judged.

Description

System and method for determining the occluded state of a vehicle
Technical Field
The invention relates to a system and a method for determining the occluded state of a vehicle, and belongs to the technical field of vehicle driver-assistance systems.
Background
In a vehicle-mounted driver-assistance system, the motion state of vehicles on the road ahead must be determined accurately. In a conventional pipeline, vehicles on the road are first detected by a detection network, and the detected target vehicles are then subjected to fine-grained vehicle-type classification and target localization. If a target vehicle is occluded by other vehicles or obstacles, its vehicle-type and position information are prone to error, so the driving system misjudges the state information of the target vehicle, which creates hidden dangers for driving safety. Accurately acquiring the occlusion state of a target vehicle has therefore become an indispensable part of a vehicle-mounted driver-assistance system.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and to provide a system and a method for determining the occluded state of a vehicle.
To achieve this purpose, the invention adopts the following technical solutions:
In a first aspect, the present invention provides a method for determining the occluded state of a vehicle, comprising:
acquiring a target image;
performing target detection on the target image to obtain position information of a target vehicle;
inputting the position information into a deep neural network for feature extraction to obtain occlusion information of the target vehicle;
the occlusion information of the target vehicle comprises occlusion state information, occlusion position information and occlusion degree information.
Further, the target image is obtained by decompressing the video frame by frame after the road condition directly in front of the vehicle is captured while the vehicle is driving.
Further, the target image is subjected to target detection by the YOLOV5 target detection algorithm to obtain the position information of all target vehicles in the target image.
Further, the position information is used to crop an image block of the target vehicle, and the image block is then scaled to a specified size and sent to a deep neural network for feature extraction.
Further, the occlusion state information includes occluded and not occluded.
Further, the occlusion position information includes left occlusion, right occlusion and occlusion on both sides.
Further, the occlusion degree information is the ratio of the occluded portion of the target vehicle to the portion shared between the target vehicle and the obstacle.
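For concreteness, this ratio can be written as the following expression; the symbols are introduced here only for illustration and are not taken from the patent, which defines the two regions through its figures:

$$d_{\mathrm{occ}} = \frac{A_{\mathrm{occ}}}{A_{\mathrm{shared}}}$$

where $A_{\mathrm{occ}}$ is the area of the occluded portion of the target vehicle and $A_{\mathrm{shared}}$ is the area of the portion shared between the target vehicle and the obstacle; the detailed description below bounds this value to the interval [0, 1].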
In a second aspect, the present invention provides a system for determining the occluded state of a vehicle, comprising:
an image acquisition module: acquiring a target image;
a target vehicle detection module: performing target detection on the target image to obtain position information of a target vehicle;
a vehicle occlusion judgment module: inputting the position information into a deep neural network for feature extraction to obtain occlusion information of the target vehicle;
the occlusion information of the target vehicle comprises occlusion state information, occlusion position information and occlusion degree information.
In a third aspect, a device for determining the occluded state of a vehicle comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any of the above.
In a fourth aspect, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.
Compared with the prior art, the invention has the following beneficial effects:
By judging the occlusion state of the target vehicle, the method allows the confidence of the target's category and position information to be lowered accordingly, which reduces decision errors of the whole driver-assistance system, further optimizes the user's driving experience and improves driving safety;
The deep neural network can judge three kinds of occlusion information simultaneously: whether the vehicle is occluded, where the vehicle is occluded, and to what degree the vehicle is occluded. By exploiting the feature sharing of the neural network, the three kinds of occlusion state information are output synchronously, and this network design saves the system's computing resources within the module. The three outputs are then judged hierarchically: first whether the target is occluded at all; then, if it is occluded, whether its left and right sides are occluded; and finally the degree to which the target vehicle is occluded. Through this hierarchical judgment of the occluded state, comprehensive occlusion information can be provided to the driver-assistance system, and the accuracy of judging the occluded state of the vehicle is effectively improved.
Drawings
FIG. 1 is a general flow chart of a system provided by an embodiment of the present invention;
FIG. 2 is a schematic view of vehicle detection provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of occlusion categories provided by an embodiment of the invention;
FIG. 4 is a schematic diagram of the degree of occlusion provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a vehicle occlusion determination module according to an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example one:
a method for judging the shielded state of a vehicle is mainly applied to judging the shielded state of a target vehicle in a vehicle-mounted auxiliary driving system. The target vehicle detected in the video is analyzed, so that the shielding state, the shielding position and the shielding degree of the target vehicle are obtained. The system is mainly divided into five parts: the method comprises the following steps of image acquisition, vehicle detection, judgment of the shielded state of the vehicle, judgment of the shielded position of the vehicle and judgment of the shielded degree of the vehicle, and comprises the following steps: acquiring a target image; carrying out target detection on the target image to obtain position information of a target vehicle; inputting the position information into a deep neural network for feature extraction to obtain shielded information of the target vehicle; the shielded information of the target vehicle comprises shielded state information, shielded position information and shielded degree information; after acquiring the vehicle condition information right ahead in the driving process of the vehicle, the target image is obtained by decompressing the video image frame by frame; after the target image is subjected to target detection through a YOLOV5 target detection algorithm, obtaining the position information of all target vehicles in the target image; after the position information is cut to obtain an image block of the target vehicle, zooming the image block to a specified scale, and sending the image block into a deep neural network for feature extraction; the occluded state information comprises occluded and non-occluded; the shielded position information comprises left shielding, right shielding and two-side shielding; the shielded degree information is a ratio of a portion of the target vehicle that is shielded to a portion shared between the target vehicle and the obstacle.
Referring to fig. 1, a vehicle-mounted camera first acquires an image of the road scene in front of the vehicle, giving the forward-view road scene that currently needs to be processed. A YOLOV5-based target detection method is then used to detect the target vehicles present in the image. Preferably, all candidate target vehicles are processed, and the deep learning neural network is used to obtain the occlusion state, the occlusion position and the occlusion degree of each target in the picture. With this system, the occlusion states of the different vehicles seen by the vehicle-mounted driver-assistance system are obtained effectively at every moment, which effectively improves the decision-making performance of the whole driver-assistance system.
Example two:
a vehicle sheltered state judgment system is mainly divided into five modules which are respectively: the device comprises an image acquisition module, a vehicle detection module, a vehicle sheltered state judgment module, a vehicle sheltered position judgment module and a vehicle sheltered degree judgment module. Wherein the image acquisition module: and recording a road scene right in front of the vehicle through a vehicle-mounted Adas special image acquisition device. And extracting the video segments to obtain a target image needing to be processed. A target vehicle detection module: and detecting all vehicles in the target image by using the current advanced YOLO-V5 target detection framework to acquire the position information of all vehicles in the image. The vehicle is sheltered from the state judgment module: the position information of the target vehicle acquired by the detection network is processed, the target vehicle in the picture is cut out, and then the target vehicle is sent into the deep neural network to output the state information of whether the target vehicle is shielded. The vehicle is sheltered from the position and judges the module: the position information of the target vehicle acquired by the detection network is processed, the target vehicle in the picture is cut out, and then the target vehicle is sent into the deep neural network to output the shielded state information of the left side and the right side of the target vehicle respectively. The vehicle is sheltered from the state judgment module: the position information of the target vehicle acquired by the detection network is processed, the target vehicle in the picture is cut out, and then the target vehicle is sent into the deep neural network to output the shielding degree information of the target vehicle.
Image acquisition module: a vehicle-mounted monocular camera is mounted on the windshield and captures the road condition directly in front of the vehicle while the vehicle is driving. The captured pictures have an input size of 1280 × 720. The video is decompressed frame by frame to obtain the images to be processed.
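A minimal sketch of this frame-by-frame acquisition step using OpenCV is shown below; the capture source and the generator interface are assumptions, since the patent only specifies a 1280 × 720 monocular camera and frame-by-frame decompression:

```python
import cv2

def frames_from_camera(source=0, width=1280, height=720):
    """Yield 1280x720 frames from the vehicle-mounted camera or a recorded video."""
    cap = cv2.VideoCapture(source)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    try:
        while True:
            ok, frame = cap.read()  # decode one frame at a time
            if not ok:
                break
            yield frame
    finally:
        cap.release()

# Usage: iterate over the frames of a recorded drive
# for frame in frames_from_camera("drive.mp4"):
#     process(frame)
```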
Target vehicle detection module: the vehicle detection module adopts the YOLOV5 target detection framework and performs target detection on the whole image to acquire the position information of all target vehicles in the image. See fig. 2 for the detection results.
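One way to approximate this detection step is with the publicly released ultralytics/yolov5 model loaded through torch.hub, as sketched below; the pretrained COCO weights, the vehicle class filter (car, bus, truck) and the 0.4 confidence threshold are assumptions, since the patent presumably trains its own detector on vehicle data:

```python
import torch

# Assumption: the public ultralytics/yolov5 release with COCO weights.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
VEHICLE_CLASSES = {2, 5, 7}  # COCO indices: car, bus, truck

def detect_vehicles(frame):
    """Return a list of (x1, y1, x2, y2) vehicle boxes in pixel coordinates."""
    results = model(frame)
    boxes = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if int(cls) in VEHICLE_CLASSES and conf > 0.4:
            boxes.append((int(x1), int(y1), int(x2), int(y2)))
    return boxes
```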
Vehicle occlusion judgment module: the vehicle occlusion judgment module judges the occlusion state of all the target vehicles obtained by the detection module through a deep neural network. Based on the position information provided by the detection network, the image block of each target vehicle to be processed is cropped out, scaled to 128 × 128 and fed into the deep neural network to extract features. The three tasks of occlusion state, occlusion position and occlusion degree share the same deep learning backbone network for feature extraction, after which the network splits into three parts that output the occlusion information of the vehicle respectively: the first part directly judges whether the vehicle is occluded, the second part outputs the occlusion state of the left and right sides of the target vehicle respectively, and the third part outputs the degree to which the target vehicle is occluded.
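A minimal PyTorch sketch of this shared-backbone, three-branch design follows; the layer widths and the number of stages are illustrative assumptions, chosen only so that a 128 × 128 × 3 crop maps to a compact feature map feeding the three heads:

```python
import torch
import torch.nn as nn

class OcclusionNet(nn.Module):
    """Shared backbone with three heads: occlusion state, position, degree."""
    def __init__(self):
        super().__init__()
        # Backbone: 128x128x3 -> 4x4x64 feature map via five stride-2 stages.
        layers, ch = [], 3
        for out_ch in (16, 32, 64, 64, 64):
            layers += [nn.Conv2d(ch, out_ch, 3, stride=2, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            ch = out_ch
        self.backbone = nn.Sequential(*layers)
        feat = 4 * 4 * 64
        self.state_head = nn.Linear(feat, 1)     # occluded / not occluded
        self.position_head = nn.Linear(feat, 2)  # left occluded, right occluded
        self.degree_head = nn.Linear(feat, 1)    # occlusion degree in [0, 1]

    def forward(self, x):
        f = self.backbone(x).flatten(1)
        return (torch.sigmoid(self.state_head(f)),
                torch.sigmoid(self.position_head(f)),
                torch.sigmoid(self.degree_head(f)))

# Usage: x = torch.randn(1, 3, 128, 128); state, position, degree = OcclusionNet()(x)
```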
In terms of the algorithm, the YOLOV5 target detection algorithm is employed herein to detect the vehicles present in the scene. The original image is first processed at its borders to obtain a 1280 × 1280 image, which is scaled to 1024 × 1024 and sent to the detection backbone network for feature extraction. The network is then downsampled by factors of 32, 16 and 8 to obtain three detection heads of sizes 32 × 32, 64 × 64 and 128 × 128, used to detect target vehicles at near, middle and far range respectively. For data augmentation, the Mosaic scheme is adopted; its main idea is to randomly crop four pictures and stitch them into one picture that serves as training data. This enriches the background of the pictures, and stitching four pictures together effectively increases the batch size, since all four pictures contribute when batch normalization is computed, so training does not depend heavily on a large batch_size. For the loss function, CIOU Loss is adopted as the loss of the target bounding box. This loss function simultaneously takes into account the overlap area, the center distance and the aspect ratio between the predicted box and the labeled box, which effectively improves the vehicle detection performance. Let the predicted box be A and the labeled box be B; the IOU of the two is as follows:
$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$$
the formula for CIOU Loss is then as follows:
$$\mathcal{L}_{\mathrm{CIoU}} = 1 - \mathrm{IoU} + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v$$
where $\rho(b, b^{gt})$ denotes the Euclidean distance between the center points $b$ and $b^{gt}$ of A and B, $c$ is the diagonal length of the minimum enclosing rectangle of A and B, and $v$ measures the difference in aspect ratio between A and B; $v$ is defined as follows:
$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2$$

where $w$, $h$ and $w^{gt}$, $h^{gt}$ are the widths and heights of the predicted box A and the labeled box B, respectively.
α is a weighting factor, defined as follows:
$$\alpha = \frac{v}{(1 - \mathrm{IoU}) + v}$$
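For reference, a plain Python implementation of the CIoU loss defined above is sketched below, assuming boxes in (x1, y1, x2, y2) format:

```python
import math

def ciou_loss(box_a, box_b, eps=1e-9):
    """CIoU loss between predicted box A and labeled box B, both (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas -> IoU
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # Squared center distance (rho^2) and enclosing-box diagonal squared (c^2)
    rho2 = ((ax1 + ax2 - bx1 - bx2) / 2) ** 2 + ((ay1 + ay2 - by1 - by2) / 2) ** 2
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio term v and weighting factor alpha
    v = (4 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1 + eps))
                              - math.atan((ax2 - ax1) / (ay2 - ay1 + eps))) ** 2
    alpha = v / ((1 - iou) + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

# Example: identical boxes give a loss of ~0
print(ciou_loss((10, 10, 50, 50), (10, 10, 50, 50)))
```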
Vehicle occlusion is classified herein into four categories: no occlusion, left occlusion, right occlusion, and occlusion on both sides; see FIG. 3 for examples. A value between 0 and 1 is used to represent the degree to which the rear of the target vehicle is occluded: 0 means the target is not occluded, and 1 means the rear of the target is completely occluded. The calculation of the occlusion degree is illustrated in FIG. 4: the left side shows the portion shared between the target vehicle and the obstacle, the right side shows the portion of the target vehicle that is occluded, and the ratio of the occluded portion to the shared portion is used herein as the degree to which the target vehicle is occluded.
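A minimal sketch of this degree computation from two binary region masks follows; how the shared and occluded regions are delineated follows FIG. 4 and is assumed to be available (for example from annotation), and the clamping to [0, 1] mirrors the bounds stated above:

```python
import numpy as np

def occlusion_degree(occluded_mask: np.ndarray, shared_mask: np.ndarray) -> float:
    """Occlusion degree: area of the occluded portion of the target vehicle
    divided by the area of the portion shared between the target vehicle and
    the obstacle (cf. FIG. 4). Inputs are boolean masks over the image."""
    shared_area = float(shared_mask.sum())
    if shared_area == 0:
        return 0.0  # no shared region with an obstacle: treat as not occluded
    degree = float(occluded_mask.sum()) / shared_area
    return max(0.0, min(1.0, degree))  # the description bounds the degree to [0, 1]
```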
The vehicle detection module detects the vehicles in the image and acquires the position of each target vehicle in the image. Suppose the coordinates of the upper-left corner of the target vehicle in the image are $(x_1, y_1)$ and the coordinates of the lower-right corner are $(x_2, y_2)$. The target vehicle is centered at

$$\left(\frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2}\right)$$

and an image block with side length $1.2 \times \max(x_2 - x_1, y_2 - y_1)$ is cropped around this center, scaled to 128 × 128 and sent to the deep neural network for feature extraction. After feature extraction is completed, the network splits into three branches that respectively handle the three occlusion tasks: the occlusion state, the occlusion position and the occlusion degree of the vehicle. The three tasks share the features of the same backbone network, which not only saves computation for the whole task but also lets the tasks complement each other, effectively improving the occlusion-judgment performance of the model. The network structure of the vehicle occlusion judgment module is shown in fig. 5, where the size of the input picture is 128 × 128 × 3 and the feature map extracted by the feature-extraction module is 4 × 4 × 64. The occlusion-state branch judges whether the target vehicle is occluded, the occlusion-position branch judges whether the left and right sides of the target vehicle are occluded, and the occlusion-degree branch judges the degree to which the target vehicle is occluded.
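A sketch of this crop-and-scale preprocessing step using OpenCV is shown below; clamping the crop window to the image bounds is an added assumption not described in the patent:

```python
import cv2
import numpy as np

def crop_target(image: np.ndarray, box, out_size=128, margin=1.2):
    """Crop a square patch centered on the detected vehicle box (x1, y1, x2, y2)
    with side length margin * max(width, height), then resize to out_size x out_size."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = margin * max(x2 - x1, y2 - y1) / 2.0
    h, w = image.shape[:2]
    # Clamp the crop window to the image bounds (assumption: simple clipping).
    left, right = int(max(0, cx - half)), int(min(w, cx + half))
    top, bottom = int(max(0, cy - half)), int(min(h, cy + half))
    patch = image[top:bottom, left:right]
    return cv2.resize(patch, (out_size, out_size))
```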
Example three:
the embodiment of the invention also provides a device for judging the sheltered state of the vehicle, which comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the following method:
acquiring a target image;
performing target detection on the target image to obtain position information of a target vehicle;
inputting the position information into a deep neural network for feature extraction to obtain occlusion information of the target vehicle;
the occlusion information of the target vehicle comprises occlusion state information, occlusion position information and occlusion degree information.
Example four:
an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following method steps:
acquiring a target image;
performing target detection on the target image to obtain position information of a target vehicle;
inputting the position information into a deep neural network for feature extraction to obtain occlusion information of the target vehicle;
the occlusion information of the target vehicle comprises occlusion state information, occlusion position information and occlusion degree information.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a system, method, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of systems, devices (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and such modifications and variations shall also fall within the protection scope of the present invention.

Claims (10)

1. A method for determining the occluded state of a vehicle, characterized by comprising the following steps:
acquiring a target image;
performing target detection on the target image to obtain position information of a target vehicle;
inputting the position information into a deep neural network for feature extraction to obtain occlusion information of the target vehicle;
the occlusion information of the target vehicle comprises occlusion state information, occlusion position information and occlusion degree information.
2. The method for determining the occluded state of a vehicle according to claim 1, wherein the target image is obtained by decompressing the video frame by frame after the road condition directly in front of the vehicle is captured while the vehicle is driving.
3. The method for determining the occluded state of a vehicle according to claim 1, wherein the target image is subjected to target detection by the YOLOV5 target detection algorithm to obtain the position information of all target vehicles in the target image.
4. The method for determining the occluded state of a vehicle according to claim 1, wherein the position information is used to crop an image block of the target vehicle, and the image block is then scaled to a specified size and sent to a deep neural network for feature extraction.
5. The method for determining the occluded state of a vehicle according to claim 1, wherein the occlusion state information includes occluded and not occluded.
6. The method for determining the occluded state of a vehicle according to claim 1, wherein the occlusion position information includes left occlusion, right occlusion and occlusion on both sides.
7. The method for determining the occluded state of a vehicle according to claim 1, wherein the occlusion degree information is the ratio of the occluded portion of the target vehicle to the portion shared between the target vehicle and the obstacle.
8. A system for determining the occluded state of a vehicle, characterized by comprising:
an image acquisition module: acquiring a target image;
a target vehicle detection module: performing target detection on the target image to obtain position information of a target vehicle;
a vehicle occlusion judgment module: inputting the position information into a deep neural network for feature extraction to obtain occlusion information of the target vehicle;
the occlusion information of the target vehicle comprises occlusion state information, occlusion position information and occlusion degree information.
9. A device for determining the occluded state of a vehicle, characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 7.
10. Computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202110947023.3A 2021-08-18 2021-08-18 System and method for judging shielded state of vehicle Pending CN113657277A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110947023.3A CN113657277A (en) 2021-08-18 2021-08-18 System and method for judging shielded state of vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110947023.3A CN113657277A (en) 2021-08-18 2021-08-18 System and method for judging shielded state of vehicle

Publications (1)

Publication Number Publication Date
CN113657277A true CN113657277A (en) 2021-11-16

Family

ID=78480804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110947023.3A Pending CN113657277A (en) 2021-08-18 2021-08-18 System and method for judging shielded state of vehicle

Country Status (1)

Country Link
CN (1) CN113657277A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024051614A1 (en) * 2022-09-06 2024-03-14 华为技术有限公司 Driver assistance method and related apparatus

Similar Documents

Publication Publication Date Title
CN108388879B (en) Target detection method, device and storage medium
WO2018103608A1 (en) Text detection method, device and storage medium
EP3168810B1 (en) Image generating method and apparatus
JP4157620B2 (en) Moving object detection apparatus and method
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN109784290B (en) Target detection method, device, equipment and readable storage medium
CN112997190B (en) License plate recognition method and device and electronic equipment
CN111213153A (en) Target object motion state detection method, device and storage medium
CN112825192A (en) Object identification system and method based on machine learning
CN105095835A (en) Pedestrian detection method and system
Meus et al. Embedded vision system for pedestrian detection based on HOG+ SVM and use of motion information implemented in Zynq heterogeneous device
CN113657277A (en) System and method for judging shielded state of vehicle
US11458892B2 (en) Image generation device and image generation method for generating a composite image
CN111814773A (en) Lineation parking space identification method and system
CN108288041B (en) Preprocessing method for removing false detection of pedestrian target
CN113869163B (en) Target tracking method and device, electronic equipment and storage medium
CN114898306A (en) Method and device for detecting target orientation and electronic equipment
CN114445787A (en) Non-motor vehicle weight recognition method and related equipment
Egodawela et al. Vehicle Detection and Localization for Autonomous Traffic Monitoring Systems in Unstructured Crowded Scenes
CN109509205B (en) Foreground detection method and device
JP3032060B2 (en) Roadway recognition device for mobile vehicles
Wonneberger et al. Parallel feature extraction and heterogeneous object-detection for multi-camera driver assistance systems
KR20180069282A (en) Method of detecting traffic lane for automated driving
CN112597800B (en) Method and system for detecting sitting-up actions of students in recording and broadcasting system
US20230367806A1 (en) Image processing apparatus, image processing method, and non-transitory storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination